The construction of exclusion regions in searches for new phenomena has caused considerable discussion for many years, yet no clear mathematical definition of the problem has been given so far. In this paper we formulate the problem in mathematical terms and propose a solution within the framework of statistical tests. The proposed solution avoids the problems of the currently used procedures.
The projected discovery and exclusion capabilities of particle physics and astrophysics/cosmology experiments are often quantified using the median expected $p$-value or its corresponding significance. We argue that this criterion leads to flawed results, which for example can counterintuitively project lessened sensitivities if the experiment takes more data or reduces its background. We discuss the merits of several alternatives to the median expected significance, both when the background is known and when it is subject to some uncertainty. We advocate for standard use of the exact Asimov significance $Z^{\rm A}$ detailed in this paper.
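For a simple counting experiment with known background, the median expected significance is often estimated with the widely used Asimov formula of Cowan, Cranmer, Gross and Vitells. The sketch below illustrates that standard approximation (not the exact $Z^{\rm A}$ expressions derived in the paper) and how it departs from the naive $s/\sqrt{b}$ estimate:

```python
import math

def asimov_significance(s, b):
    """Median discovery significance for a Poisson counting experiment
    with expected signal s and known expected background b, using the
    standard Asimov approximation (Cowan, Cranmer, Gross, Vitells)."""
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

# For s << b this reduces to the familiar s/sqrt(b), but it stays
# sensible when s is comparable to b, where s/sqrt(b) overestimates.
print(asimov_significance(10.0, 100.0))  # slightly below 10/sqrt(100) = 1
print(asimov_significance(50.0, 100.0))
```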
This article presents a derivation of analytical predictions for steady-state distributions of net time gaps among clusters of vehicles moving inside a traffic stream. Using the thermodynamic socio-physical traffic model with short-ranged repulsion between particles (originally introduced in [Physica A \textbf{333} (2004) 370]) we first derive the time-clearance distribution in the model. Subsequently, the statistical distributions for the so-called time multi-clearances are calculated by means of the theory of functional convolutions. Moreover, all the theoretical assumptions used in the above-mentioned calculations are verified by statistical analysis of traffic data. The mathematical predictions acquired in this paper are thoroughly compared with relevant empirical quantities and discussed in the context of three-phase traffic theory.
We propose a method for setting limits that avoids excluding parameter values for which the sensitivity falls below a specified threshold. These power-constrained limits (PCL) address the issue that motivated the widely used CLs procedure, but do so in a way that makes more transparent the properties of the statistical test to which each value of the parameter is subjected. A case of particular interest is that of upper limits on parameters proportional to the cross section of a process whose existence is not yet established. The basic idea of the power constraint can, however, easily be applied to other types of limits.
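For the textbook case of a single Gaussian-distributed measurement $x \sim N(\mu, \sigma)$ of a non-negative signal strength, the power-constraint idea can be sketched as follows. The confidence level and minimum-power threshold used here are conventional illustrative choices, not values fixed by the method:

```python
from statistics import NormalDist

def pcl_upper_limit(x, sigma, cl=0.95, min_power=0.159):
    """Power-constrained upper limit on mu for a single Gaussian
    measurement x ~ N(mu, sigma).  Illustrative sketch: the raw
    limit x + z_cl*sigma is replaced by the smallest mu at which the
    power of the test (probability to exclude mu when mu = 0) reaches
    min_power, whenever the raw limit falls below that value."""
    z_cl = NormalDist().inv_cdf(cl)            # ~1.645 for cl = 0.95
    raw_limit = x + z_cl * sigma
    # Power against mu when the true signal is zero:
    #   M(mu) = Phi(mu/sigma - z_cl) >= min_power
    # which is equivalent to mu >= (z_cl + Phi^{-1}(min_power)) * sigma.
    mu_min = (z_cl + NormalDist().inv_cdf(min_power)) * sigma
    return max(raw_limit, mu_min)

# A strong downward fluctuation no longer drives the limit to
# unphysically small (or negative) values:
print(pcl_upper_limit(-2.0, 1.0))   # constrained, ~0.65*sigma
print(pcl_upper_limit(1.0, 1.0))    # unconstrained, ~2.64*sigma
```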
Scaling regions -- intervals on a graph where the dependent variable depends linearly on the independent variable -- abound in dynamical systems, notably in calculations of invariants like the correlation dimension or a Lyapunov exponent. In these applications, scaling regions are generally selected by hand, a process that is subjective and often challenging due to noise, algorithmic effects, and confirmation bias. In this paper, we propose an automated technique for extracting and characterizing such regions. Starting with a two-dimensional plot -- e.g., the values of the correlation integral, calculated using the Grassberger-Procaccia algorithm over a range of scales -- we create an ensemble of intervals by considering all possible combinations of endpoints, generating a distribution of slopes from least-squares fits weighted by the length of the fitting line and the inverse square of the fit error. The mode of this distribution gives an estimate of the slope of the scaling region (if it exists). The endpoints of the intervals that correspond to the mode provide an estimate for the extent of that region. When there is no scaling region, the distributions will be wide and the resulting error estimates for the slope will be large. We demonstrate this method for computations of dimension and Lyapunov exponent for several dynamical systems, and show that it can be useful in selecting values for the parameters in time-delay reconstructions.
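A minimal, dependency-free sketch of the interval-ensemble idea follows. The minimum interval length, slope bin width, and regularizer are hypothetical constants chosen for illustration; the paper's own implementation may differ in detail:

```python
import math
from collections import defaultdict

def fit_slope(xs, ys):
    """Least-squares slope and RMS residual of a straight-line fit."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    rss = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    return slope, math.sqrt(rss / n)

def scaling_region(xs, ys, min_len=5, bin_width=0.05, eps=1e-6):
    """Fit every interval [i, j] with j - i >= min_len, weight each
    slope by interval length over squared fit error (eps avoids a
    division by zero on perfectly linear segments), and return the
    modal slope plus the endpoints of its heaviest interval."""
    weights = defaultdict(float)                  # slope bin -> total weight
    best = defaultdict(lambda: (0.0, None))       # slope bin -> (w, (i, j))
    n = len(xs)
    for i in range(n - min_len):
        for j in range(i + min_len, n):
            slope, err = fit_slope(xs[i:j + 1], ys[i:j + 1])
            w = (xs[j] - xs[i]) / (err ** 2 + eps)
            b = round(slope / bin_width)
            weights[b] += w
            if w > best[b][0]:
                best[b] = (w, (i, j))
    mode_bin = max(weights, key=weights.get)
    i, j = best[mode_bin][1]
    return mode_bin * bin_width, (xs[i], xs[j])

# Synthetic curve: slope 2 up to x = 1.5, then a quadratic tail.
xs = [0.05 * k for k in range(41)]
ys = [2.0 * x if x <= 1.5 else 3.0 + 4.0 * (x - 1.5) ** 2 for x in xs]
slope, (lo, hi) = scaling_region(xs, ys)
print(slope, lo, hi)   # recovers the linear region and its slope
```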
In neutral meson mixing, a certain class of convolution integrals is required whose solution involves the error function $\mathrm{erf}(z)$ of a complex argument $z$. We show the general shape of the analytic solution of these integrals, and give expressions which allow the normalisation of these expressions for use in probability density functions. Furthermore, we derive expressions which allow a (decay time) acceptance to be included in these integrals, or allow the calculation of moments. We also describe the implementation of numerical routines which allow the numerical evaluation of $w(z)=e^{-z^2}(1-\mathrm{erf}(-iz))$, sometimes also called the Faddeeva function, in C++. These new routines improve over the old CERNLIB routine(s) WWERF/CWERF in terms of both speed and accuracy. These new routines are part of the RooFit package, and have been distributed with it since ROOT version 5.34/08.
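As an independent, dependency-free cross-check of such routines (deliberately slow, and not the fast algorithm a production implementation would use), $w(z)$ can be evaluated for $\mathrm{Im}\,z > 0$ directly from its integral representation $w(z) = \frac{i}{\pi}\int_{-\infty}^{\infty} \frac{e^{-t^2}}{z-t}\,dt$:

```python
import math

def faddeeva_quadrature(z, half_width=8.0, n=16000):
    """Evaluate w(z) = exp(-z^2) * erfc(-iz) for Im(z) > 0 from the
    integral representation w(z) = (i/pi) * Int exp(-t^2)/(z - t) dt,
    truncated to [-half_width, half_width] and computed by trapezoidal
    quadrature.  A cross-check only, not a production routine."""
    if z.imag <= 0:
        raise ValueError("integral representation requires Im(z) > 0")
    h = 2.0 * half_width / n
    total = 0.0 + 0.0j
    for k in range(n + 1):
        t = -half_width + k * h
        weight = 0.5 if k in (0, n) else 1.0
        total += weight * math.exp(-t * t) / (z - t)
    return (1j / math.pi) * h * total

# On the positive imaginary axis, w(iy) = exp(y^2) * erfc(y), which
# can be checked against the real-valued math.erfc:
y = 1.0
print(abs(faddeeva_quadrature(1j * y) - math.exp(y * y) * math.erfc(y)))
```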