
Towards Practical Mean Bounds for Small Samples

Added by My Phan
Publication date: 2021
Language: English





Historically, to bound the mean for small sample sizes, practitioners have had to choose between methods with unrealistic assumptions about the unknown distribution (e.g., Gaussianity) and methods, like Hoeffding's inequality, that use weaker assumptions but produce much looser (wider) intervals. Anderson (1969) proposed a mean confidence interval, strictly better than or equal to Hoeffding's, whose only assumption is that the distribution's support is contained in an interval $[a,b]$. For the first time since then, we present a new family of bounds that compares favorably to Anderson's. We prove that each bound in the family has *guaranteed coverage*, i.e., it holds with probability at least $1-\alpha$ for all distributions on an interval $[a,b]$. Furthermore, one of the bounds is tighter than or equal to Anderson's for all samples. In simulations, we show that for many distributions the gain over Anderson's bound is substantial.
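The baseline the paper improves on, Hoeffding's interval, is simple to state: for $n$ observations in $[a,b]$, the two-sided $1-\alpha$ interval is the sample mean plus or minus $(b-a)\sqrt{\ln(2/\alpha)/(2n)}$. A minimal sketch (the sample data below is illustrative, not from the paper):

```python
import math

def hoeffding_interval(sample, a, b, alpha=0.05):
    """Two-sided 1-alpha confidence interval for the mean via Hoeffding's
    inequality, assuming every observation lies in [a, b]."""
    n = len(sample)
    mean = sum(sample) / n
    half_width = (b - a) * math.sqrt(math.log(2 / alpha) / (2 * n))
    # Clip to [a, b]: the mean cannot lie outside the support.
    return max(a, mean - half_width), min(b, mean + half_width)

# Illustrative small sample of n = 20 observations in [0, 1].
data = [0.1, 0.4, 0.35, 0.8, 0.6, 0.2, 0.5, 0.45, 0.9, 0.3,
        0.55, 0.65, 0.25, 0.7, 0.15, 0.5, 0.4, 0.6, 0.35, 0.75]
lo, hi = hoeffding_interval(data, 0.0, 1.0)
```

Note how wide the interval is for small $n$: with $n=20$ and $\alpha=0.05$ the half-width is about $0.30$ on a unit interval, which is the looseness Anderson's bound (and the paper's new family) attacks.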




Related research

We establish exponential bounds for the hypergeometric distribution which include a finite sampling correction factor, but are otherwise analogous to bounds for the binomial distribution due to Leon and Perron (2003) and Talagrand (1994). We also establish a convex ordering for sampling without replacement from populations of real numbers between zero and one: a population of all zeros or ones (and hence yielding a hypergeometric distribution in the upper bound) gives the extreme case.
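A well-known bound in the same spirit is the Hoeffding-Serfling inequality (Serfling, 1974), where sampling $n$ of $N$ values in $[0,1]$ without replacement tightens Hoeffding's exponent by the finite-sampling correction factor $1-(n-1)/N$. A minimal sketch of that comparison (this illustrates the correction-factor idea, not the specific Leon-Perron bounds of the abstract):

```python
import math

def serfling_bound(n, N, t):
    """Hoeffding-Serfling upper bound on P(sample mean - pop mean >= t)
    when drawing n of N values in [0, 1] without replacement.
    The factor (1 - (n-1)/N) is the finite-sampling correction."""
    f = (n - 1) / N
    return math.exp(-2 * n * t ** 2 / (1 - f))

def hoeffding_bound(n, t):
    """Classical Hoeffding bound for i.i.d. draws from [0, 1]."""
    return math.exp(-2 * n * t ** 2)
```

Drawing half of a finite population (`n = 50`, `N = 100`) roughly halves the exponent's denominator, so the without-replacement bound is markedly smaller than the i.i.d. one; as `N` grows the two coincide.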
In this paper, we consider the information content of the maximum ranked set sampling procedure with unequal samples (MRSSU) in terms of Tsallis entropy, a nonadditive generalization of Shannon entropy. We obtain several results for Tsallis entropy, including bounds, monotonicity properties, stochastic orders, and sharp bounds under some assumptions. We also compare the uncertainty and information content of MRSSU with its counterpart based on simple random sampling (SRS) data. Finally, we develop some characterization results in terms of the cumulative Tsallis entropy and residual Tsallis entropy of MRSSU and SRS data.
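For a discrete distribution $p$, the Tsallis entropy is $S_q(p) = (1 - \sum_i p_i^q)/(q-1)$, and the $q \to 1$ limit recovers Shannon entropy, which is the nonadditive generalization the abstract refers to. A minimal sketch:

```python
import math

def tsallis_entropy(p, q):
    """Tsallis entropy S_q(p) = (1 - sum_i p_i^q) / (q - 1).
    The q -> 1 limit recovers Shannon entropy -sum_i p_i * log(p_i)."""
    if abs(q - 1.0) < 1e-12:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

# A four-point uniform distribution: S_2 = 1 - 4 * (1/4)^2 = 0.75,
# while the Shannon (q -> 1) value is log(4).
uniform = [0.25] * 4
```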
We consider the problem of finding confidence intervals for the risk of forecasting the future of a stationary, ergodic stochastic process, using a model estimated from the past of the process. We show that a bootstrap procedure provides valid confidence intervals for the risk when the data source is sufficiently mixing and the loss function and the estimator are suitably smooth. Autoregressive (AR(d)) models estimated by least squares obey the necessary regularity conditions, even when mis-specified, and simulations show that the finite-sample coverage of our bounds quickly converges to the theoretical, asymptotic level. As an intermediate step, we derive sufficient conditions for asymptotic independence between empirical distribution functions formed by splitting a realization of a stochastic process, a result of independent interest.
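The AR-plus-bootstrap pipeline can be sketched as: fit an AR model by least squares, then resample residuals to regenerate series and collect the distribution of estimated forecast risk. A crude AR(1) residual-bootstrap illustration (the true coefficient 0.6, series length, and bootstrap size are arbitrary choices, and this is a simplification of the paper's procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_ar1(x):
    """Least-squares AR(1) fit x_t ~ phi * x_{t-1}; returns (phi, residuals)."""
    past, future = x[:-1], x[1:]
    phi = past @ future / (past @ past)
    return phi, future - phi * past

# Simulate an AR(1) process with phi = 0.6 and standard normal innovations.
n = 200
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + rng.standard_normal()

phi_hat, resid = fit_ar1(x)

# Residual bootstrap: regenerate series from the fitted model and resampled
# residuals, then record the one-step squared-error risk of each refit.
B = 500
risks = np.empty(B)
for b in range(B):
    e = rng.choice(resid, size=n)
    xb = np.zeros(n)
    for t in range(1, n):
        xb[t] = phi_hat * xb[t - 1] + e[t]
    _, resid_b = fit_ar1(xb)
    risks[b] = np.mean(resid_b ** 2)

lo, hi = np.quantile(risks, [0.025, 0.975])  # 95% bootstrap interval for the risk
```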
This paper deals with a new Bayesian approach to the standard one-sample $z$- and $t$-tests. More specifically, let $x_1,\ldots,x_n$ be an independent random sample from a normal distribution with mean $\mu$ and variance $\sigma^2$. The goal is to test the null hypothesis $\mathcal{H}_0: \mu=\mu_1$ against all possible alternatives. The approach is based on the well-known formula for the Kullback-Leibler divergence between two normal distributions (sampling and hypothesized distributions selected in an appropriate way). The change of the distance from a priori to a posteriori is compared through the relative belief ratio (a measure of evidence). Eliciting the prior and checking for prior-data conflict and bias are also considered. Many theoretical properties of the procedure are developed. Besides its simplicity, and unlike the classical approach, the new approach possesses attractive and distinctive features such as giving evidence in favor of the null hypothesis. It also avoids several undesirable paradoxes, such as Lindley's paradox, that may be encountered by some existing Bayesian methods. The use of the approach is illustrated through several examples.
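The well-known formula the abstract relies on is the closed-form KL divergence between two normals, $\mathrm{KL}\big(N(\mu_1,\sigma_1^2)\,\|\,N(\mu_2,\sigma_2^2)\big) = \log(\sigma_2/\sigma_1) + \frac{\sigma_1^2+(\mu_1-\mu_2)^2}{2\sigma_2^2} - \frac{1}{2}$. A minimal sketch (the example values are illustrative):

```python
import math

def kl_normal(mu1, sigma1, mu2, sigma2):
    """Closed-form KL(N(mu1, sigma1^2) || N(mu2, sigma2^2))."""
    return (math.log(sigma2 / sigma1)
            + (sigma1 ** 2 + (mu1 - mu2) ** 2) / (2 * sigma2 ** 2)
            - 0.5)

# Identical distributions give zero; shifting one mean by 1 (unit variance)
# gives (mu1 - mu2)^2 / 2 = 0.5.
same = kl_normal(0.0, 1.0, 0.0, 1.0)
shifted = kl_normal(1.0, 1.0, 0.0, 1.0)
```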
Consider $X_1,X_2,\ldots,X_n$ that are independent and identically $N(\mu,\sigma^2)$ distributed. Suppose that we have uncertain prior information that $\mu = 0$. We answer the question: to what extent can a frequentist $1-\alpha$ confidence interval for $\mu$ utilize this prior information?
