
MCMC Confidence Intervals and Biases

 Added by Tong Liu
 Publication date 2020
Language: English





The recent paper "Simple confidence intervals for MCMC without CLTs" by J.S. Rosenthal derived a simple MCMC confidence interval using only Chebyshev's inequality, without appealing to a CLT. That result required certain assumptions about how the estimator bias and variance grow with the number of iterations $n$; in particular, that the bias is $o(1/\sqrt{n})$. This assumption seemed mild, since it is generally believed that the estimator bias is $O(1/n)$ and hence $o(1/\sqrt{n})$. However, researchers raised the question of how to verify this assumption, and indeed we show that it might not always hold. In this paper, we seek to simplify and weaken the assumptions of the previously mentioned paper, to make MCMC confidence intervals without CLTs more widely applicable.
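To illustrate the idea the abstract describes, the sketch below builds a Chebyshev-based confidence interval for an MCMC mean estimate. This is an illustrative construction, not the exact interval from Rosenthal's paper: the Metropolis chain, the batch-means variance estimator, and all function names are assumptions chosen for the example. Chebyshev's inequality gives $P(|\hat{\mu} - \mu| \ge w) \le \mathrm{Var}(\hat{\mu})/w^2$, so setting the bound to $\alpha$ yields a half-width of $\sqrt{\mathrm{Var}(\hat{\mu})/\alpha}$ with no normality assumption.

```python
import numpy as np

def metropolis_normal(n, proposal_sd=1.0, seed=0):
    """Random-walk Metropolis chain targeting N(0, 1) (illustrative target)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = 0.0
    for i in range(1, n):
        prop = x[i - 1] + proposal_sd * rng.normal()
        # Accept with probability min(1, pi(prop)/pi(x)) for the N(0,1) density,
        # i.e. log-ratio 0.5 * (x^2 - prop^2).
        if np.log(rng.uniform()) < 0.5 * (x[i - 1] ** 2 - prop ** 2):
            x[i] = prop
        else:
            x[i] = x[i - 1]
    return x

def chebyshev_ci(chain, alpha=0.05, n_batches=30):
    """Chebyshev interval: choose half-width w with Var(est)/w^2 = alpha."""
    est = chain.mean()
    usable = len(chain) // n_batches * n_batches
    batches = chain[:usable].reshape(n_batches, -1)
    # Batch-means estimate of Var(est); Chebyshev needs only a variance
    # estimate, not a CLT for the estimator.
    var_est = batches.mean(axis=1).var(ddof=1) / n_batches
    half_width = np.sqrt(var_est / alpha)
    return est - half_width, est + half_width

chain = metropolis_normal(50_000)
lo, hi = chebyshev_ci(chain)  # should cover the true mean 0
```

Note that the Chebyshev interval is wider than the usual CLT interval at the same level (half-width $\sqrt{v/\alpha}$ rather than $z_{\alpha/2}\sqrt{v}$), which is the price paid for dropping the CLT.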



Related research


Consider $X_1, X_2, \ldots, X_n$ that are independent and identically $N(\mu, \sigma^2)$ distributed. Suppose that we have uncertain prior information that $\mu = 0$. We answer the question: to what extent can a frequentist $1-\alpha$ confidence interval for $\mu$ utilize this prior information?
116 - Weizhen Wang 2021
We introduce a general method, named the h-function method, to unify the constructions of level-$\alpha$ exact tests and $1-\alpha$ exact confidence intervals. Using this method, any confidence interval is improved as follows: i) an approximate interval, including a point estimator, is modified to an exact interval; ii) an exact interval is refined to an interval that is a subset of the previous one. Two real datasets are used to illustrate the method.
In this paper, we show how concentration inequalities for Gaussian quadratic form can be used to propose exact confidence intervals of the Hurst index parametrizing a fractional Brownian motion. Both cases where the scaling parameter of the fractional Brownian motion is known or unknown are investigated. These intervals are obtained by observing a single discretized sample path of a fractional Brownian motion and without any assumption on the parameter $H$.
We use a logical device called the Dutch Book to establish epistemic confidence, defined as the sense of confidence in an observed confidence interval. This epistemic property is unavailable -- or even denied -- in orthodox frequentist inference. In financial markets, including the betting market, the Dutch Book is also known as arbitrage or a risk-free profitable transaction. A numerical confidence is deemed epistemic if its use as a betting price is protected from the Dutch Book by an external agent. Theoretically, to construct the Dutch Book, the agent must exploit unused information available in any relevant subset. Pawitan and Lee (2021) showed that confidence is an extended likelihood, and the likelihood principle states that the likelihood contains all the information in the data, hence leaving no relevant subset. Intuitively, this implies that confidence associated with the full likelihood is protected from the Dutch Book, and hence is epistemic. Our aim is to provide the theoretical support for this intuitive notion.
91 - Qian Qin , Galin L. Jones 2020
Component-wise MCMC algorithms, including Gibbs and conditional Metropolis-Hastings samplers, are commonly used for sampling from multivariate probability distributions. A long-standing question regarding Gibbs algorithms is whether a deterministic-scan (systematic-scan) sampler converges faster than its random-scan counterpart. We answer this question when the samplers involve two components by establishing an exact quantitative relationship between the $L^2$ convergence rates of the two samplers. The relationship shows that the deterministic-scan sampler converges faster. We also establish qualitative relations among the convergence rates of two-component Gibbs samplers and some conditional Metropolis-Hastings variants. For instance, it is shown that if some two-component conditional Metropolis-Hastings samplers are geometrically ergodic, then so are the associated Gibbs samplers.
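A minimal sketch of the two scan orders compared in the abstract above, for a two-component Gibbs sampler targeting a bivariate normal with correlation $\rho$ (where the full conditionals are exactly $X \mid Y = y \sim N(\rho y, 1-\rho^2)$ and symmetrically for $Y$). The target, parameter values, and function names are assumptions made for illustration; the paper's quantitative $L^2$ comparison is not reproduced here.

```python
import numpy as np

def gibbs(n, rho=0.9, scan="deterministic", seed=0):
    """Two-component Gibbs sampler for a bivariate normal with correlation rho."""
    rng = np.random.default_rng(seed)
    sd = np.sqrt(1.0 - rho ** 2)  # conditional sd: X|Y=y ~ N(rho*y, 1 - rho^2)
    x = y = 0.0
    out = np.empty((n, 2))
    for i in range(n):
        if scan == "deterministic":
            # Systematic scan: always update x, then y, in a fixed order.
            x = rho * y + sd * rng.normal()
            y = rho * x + sd * rng.normal()
        else:
            # Random scan: update one coordinate chosen uniformly at random.
            if rng.uniform() < 0.5:
                x = rho * y + sd * rng.normal()
            else:
                y = rho * x + sd * rng.normal()
        out[i] = (x, y)
    return out

det = gibbs(20_000, scan="deterministic")
rnd = gibbs(20_000, scan="random")
```

Comparing the autocorrelation of, say, `det[:, 0]` and `rnd[:, 0]` on such a toy target gives an empirical feel for the convergence-rate relationship the paper establishes; note that one deterministic-scan iteration updates both components while one random-scan iteration updates only one, which any fair comparison must account for.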