In this article, we derive an explicit formula for computing a confidence interval for the mean of a bounded random variable. Moreover, based on the proposed confidence interval, we develop multistage point estimation methods for estimating the mean with prescribed precision and confidence level.
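As a point of reference for the kind of interval discussed above, the sketch below computes a classical Hoeffding confidence interval for the mean of a variable bounded in [a, b]; this is a standard baseline, not the article's explicit formula. The function name `hoeffding_ci` is an assumption for illustration.

```python
import math

def hoeffding_ci(samples, a, b, alpha=0.05):
    """Two-sided confidence interval for the mean of a random variable
    bounded in [a, b], via Hoeffding's inequality: the half-width t
    solves 2 * exp(-2 * n * t**2 / (b - a)**2) = alpha."""
    n = len(samples)
    mean = sum(samples) / n
    half = (b - a) * math.sqrt(math.log(2 / alpha) / (2 * n))
    return mean - half, mean + half

# 100 observations in [0, 1]; the interval covers the sample mean 0.4
lo, hi = hoeffding_ci([0.2, 0.5, 0.4, 0.6, 0.3] * 20, 0.0, 1.0)
```

The interval is distribution-free over [a, b]-bounded variables, which is what makes it a natural comparison point for sharper formulas.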
In this paper, we develop a multistage approach for estimating the mean of a bounded variable. We first focus on the multistage estimation of a binomial parameter and then generalize the estimation methods to the case of general bounded random variables. A fundamental connection between a binomial parameter and the mean of a bounded variable is established. Our multistage estimation methods rigorously guarantee prescribed levels of precision and confidence.
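To make the multistage idea concrete, here is a generic two-stage sketch, not the paper's procedure: a pilot sample estimates the variance, and the final sample size is chosen so that a normal-approximation half-width falls below a target precision. All names and the crude quantile table are assumptions for illustration.

```python
import math
import random

def two_stage_mean(sample_fn, eps, delta, n0=30):
    """Generic two-stage estimation sketch: a pilot sample of size n0
    estimates the variance, which then sets the total sample size so the
    normal-approximation half-width is at most eps at confidence 1 - delta."""
    pilot = [sample_fn() for _ in range(n0)]
    mean = sum(pilot) / n0
    var = sum((x - mean) ** 2 for x in pilot) / (n0 - 1)
    z = 2.5758 if delta <= 0.01 else 1.96  # crude two-point normal quantile table
    n = max(n0, math.ceil(z * z * var / (eps * eps)))
    data = pilot + [sample_fn() for _ in range(n - n0)]
    return sum(data) / len(data), n

random.seed(1)
est, n = two_stage_mean(lambda: random.uniform(0.0, 1.0), eps=0.05, delta=0.05)
```

Unlike this heuristic, the paper's methods guarantee the prescribed precision and confidence rigorously rather than asymptotically.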
In this paper, we develop a general approach for probabilistic estimation and optimization. An explicit formula and a computational approach are established for controlling the reliability of probabilistic estimation based on a mixed criterion of absolute and relative errors. By employing the Chernoff-Hoeffding bound and the concept of sampling, the minimization of a probabilistic function is transformed into an optimization problem amenable to gradient descent algorithms.
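The transformation described above replaces an expectation by a sample average that gradient descent can handle. The sketch below illustrates this sample-average idea on a toy objective; it is not the paper's construction, and `saa_minimize` is a hypothetical name.

```python
import random

def saa_minimize(grad_sample, x0, steps=200, lr=0.1, m=64, seed=0):
    """Sample-average sketch: the gradient of an expectation E[f(x, W)] is
    replaced by the average of per-sample gradients over a fresh batch of
    W draws, then plain gradient descent is run on that estimate."""
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        ws = [rng.gauss(0.5, 0.1) for _ in range(m)]
        g = sum(grad_sample(x, w) for w in ws) / m  # Monte Carlo gradient
        x -= lr * g
    return x

# Minimize E[(x - W)^2] with W ~ N(0.5, 0.1); the minimizer is E[W] = 0.5.
x_star = saa_minimize(lambda x, w: 2 * (x - w), x0=0.0)
```

The same pattern applies whenever the probabilistic objective admits an unbiased per-sample gradient.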
Many inference problems, such as sequential decision problems like A/B testing and adaptive sampling schemes like bandit selection, are online in nature. The fundamental problem of online inference is to provide a sequence of confidence intervals that are valid uniformly over all sample sizes as the sample grows without bound. To address this question, we provide a near-optimal confidence sequence for bounded random variables by utilizing Bentkus's concentration results. We show that it improves on existing approaches that use the Cramér-Chernoff technique, such as the Hoeffding, Bernstein, and Bennett inequalities. The resulting confidence sequence is confirmed to be favorable in both synthetic coverage problems and an application to adaptive stopping algorithms.
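For contrast with the Bentkus-based construction, the sketch below builds the simplest Cramér-Chernoff-style baseline: a time-uniform Hoeffding confidence sequence obtained by spending the error budget over sample sizes with a union bound. It is one of the approaches the abstract says can be improved upon, not the paper's method.

```python
import math

def hoeffding_conf_seq(xs, alpha=0.05):
    """Time-uniform Hoeffding confidence sequence for a [0, 1]-bounded mean.
    Allocating alpha_n = alpha / (n * (n + 1)) to sample size n makes the
    per-n Hoeffding intervals simultaneously valid over all n, since the
    alpha_n sum to alpha (union bound)."""
    out, s = [], 0.0
    for n, x in enumerate(xs, start=1):
        s += x
        a_n = alpha / (n * (n + 1))
        half = math.sqrt(math.log(2 / a_n) / (2 * n))
        out.append((s / n - half, s / n + half))
    return out
```

The union-bound correction widens each interval, which is one source of the slack that sharper concentration results can remove.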
We study the problem of estimating the mean of a multivariate distribution based on independent samples. The main result is a proof of the existence of an estimator with non-asymptotic sub-Gaussian performance for all distributions satisfying mild moment assumptions.
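A well-known univariate estimator with sub-Gaussian-type deviations under only a finite second moment is the median-of-means, sketched below as an illustration of the phenomenon; the paper's multivariate estimator is a different (existence-based) construction.

```python
import statistics

def median_of_means(xs, k):
    """Median-of-means: split the data into k equal blocks, average each
    block, and return the median of the block means. A few heavy-tailed
    outliers can corrupt at most a few block means, so the median remains
    stable where the plain sample mean does not."""
    m = len(xs) // k
    blocks = [xs[i * m:(i + 1) * m] for i in range(k)]
    return statistics.median(sum(b) / len(b) for b in blocks)

# 99 clean points and one gross outlier: the plain mean is ~11, but the
# outlier lands in a single block, so the median of block means is unaffected.
est = median_of_means([1.0] * 99 + [1000.0], 5)
```

The number of blocks k trades off confidence (larger k tolerates more outliers) against the width of the deviation bound.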
We introduce the ARMA (autoregressive-moving-average) point process, which is a Hawkes process driven by a Neyman-Scott process with Poisson immigration. It contains both the Hawkes and the Neyman-Scott processes as special cases and naturally combines self-exciting and shot-noise cluster mechanisms, which is useful in a variety of applications. The name ARMA is used because the ARMA point process is an appropriate analogue of the ARMA time series model for integer-valued time series. As such, the ARMA point process framework accommodates a flexible family of models sharing methodological and mathematical similarities with ARMA time series. We derive an estimation procedure for ARMA point processes, as well as for the integer ARMA models, based on an MCEM (Monte Carlo Expectation Maximization) algorithm. This powerful framework for estimation accommodates trends in immigration, multiple parametric specifications of excitement functions, as well as cases where marks and immigrants are not observed.
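The Hawkes special case mentioned above can be simulated directly from its branching (cluster) representation, sketched below with an exponential excitation function; the ARMA point process would additionally replace the Poisson immigration with a Neyman-Scott layer. Function names and parameter choices are illustrative assumptions.

```python
import math
import random

def _poisson(rng, lam):
    """Knuth's method: count uniform draws until their product drops below e^{-lam}."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_hawkes(mu, eta, beta, T, seed=0):
    """Branching simulation of a Hawkes process on [0, T]: immigrants arrive
    as a homogeneous Poisson process of rate mu; each event independently
    spawns Poisson(eta) offspring at Exp(beta) delays (eta < 1 for stability)."""
    rng = random.Random(seed)
    gen = sorted(rng.uniform(0, T) for _ in range(_poisson(rng, mu * T)))
    events = list(gen)
    while gen:  # generate offspring generation by generation
        nxt = []
        for t in gen:
            for _ in range(_poisson(rng, eta)):
                s = t + rng.expovariate(beta)
                if s < T:
                    nxt.append(s)
        events.extend(nxt)
        gen = nxt
    return sorted(events)

times = simulate_hawkes(mu=1.0, eta=0.5, beta=2.0, T=5.0)
```

This cluster view is also what makes an EM-style treatment natural: the latent branching structure (which event spawned which) plays the role of the missing data.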