
Recursive computation for evaluating the exact $p$-values of temporal and spatial scan statistics

Published by: Satoshi Kuriki
Publication date: 2015
Research field: Mathematical Statistics
Paper language: English





Let $V$ be a finite set of indices, and let $B_i$, $i=1,\ldots,m$, be subsets of $V$ such that $V=\bigcup_{i=1}^{m}B_i$. Let $X_i$, $i\in V$, be independent random variables, and let $X_{B_i}=(X_j)_{j\in B_i}$. In this paper, we propose a recursive computation method to calculate the conditional expectation $E\bigl[\prod_{i=1}^m \chi_i(X_{B_i}) \,\big|\, N\bigr]$ given $N=\sum_{i\in V}X_i$, where $\chi_i$ is an arbitrary function. Our method is based on the recursive summation/integration technique using the Markov property in statistics. To extract the Markov property, we define an undirected graph whose cliques are the $B_i$, obtain its chordal extension, and derive the recursive formula from it. This methodology works for a class of distributions including the Poisson distribution (for which the conditional distribution given $N$ is the multinomial). The problem is motivated by the evaluation of the multiplicity-adjusted $p$-value of scan statistics in spatial epidemiology. As an illustration of the approach, we present real data analyses detecting temporal and spatial clustering.
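To make the recursion concrete, the sketch below treats the simplest temporal case: windows $B_t=\{t,t+1\}$ on a line, independent Poisson counts conditioned on their total $N$ (so the conditional law is multinomial), and $\chi_t(X_{B_t})=1\{X_t+X_{t+1}<k\}$, which turns the conditional expectation into the exact $p$-value of a window scan statistic. Because the windows form a chain, the clique graph is already chordal and the Markov property lets us sum out one count at a time, carrying only the running total and the last count as the state. The function name, the window length of two, and the parametrisation are illustrative assumptions, not the paper's general implementation.

```python
import math

def exact_scan_pvalue(p, N, k):
    """Exact P( max_t (X_t + X_{t+1}) >= k | sum_t X_t = N ) when
    (X_1, ..., X_T) ~ Multinomial(N, p), computed by recursive summation."""
    T = len(p)
    # f[t][x] = p_t^x / x!  (unnormalised multinomial factor for cell t)
    f = [[math.exp(x * math.log(pt) - math.lgamma(x + 1))
          for x in range(N + 1)] for pt in p]
    # g[s][x] = sum over (x_1, ..., x_t) with partial sum s, x_t = x, and
    # every window seen so far strictly below k, of prod_u f[u][x_u]
    g = [[f[0][x] if x == s else 0.0 for x in range(N + 1)]
         for s in range(N + 1)]
    for t in range(1, T):
        h = [[0.0] * (N + 1) for _ in range(N + 1)]
        for s in range(N + 1):
            for x_new in range(s + 1):
                # previous count must keep the window below the threshold
                x_max = min(s - x_new, k - 1 - x_new)
                acc = sum(g[s - x_new][x_prev] for x_prev in range(x_max + 1))
                h[s][x_new] = acc * f[t][x_new]
        g = h
    prob_all_below = math.exp(math.lgamma(N + 1)) * sum(g[N])
    return 1.0 - prob_all_below

# Example: 12 equiprobable time bins, N = 20 events in total, p-value of
# some adjacent pair of bins containing 8 or more events.
print(exact_scan_pvalue([1.0 / 12] * 12, N=20, k=8))
```

The recursion costs roughly $O(TN^2k)$ operations, whereas naive enumeration of all compositions of $N$ into $T$ cells grows combinatorially; this is the saving that the chordal/Markov structure buys.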




Read also

Approximate Bayesian computation (ABC) or likelihood-free inference algorithms are used to find approximations to posterior distributions without making explicit use of the likelihood function, depending instead on simulation of sample data sets from the model. In this paper we show that under the assumption of the existence of a uniform additive model error term, ABC algorithms give exact results when sufficient summaries are used. This interpretation allows the approximation made in many previous application papers to be understood, and should guide the choice of metric and tolerance in future work. ABC algorithms can be generalized by replacing the 0-1 cut-off with an acceptance probability that varies with the distance of the simulated data from the observed data. The acceptance density gives the distribution of the error term, enabling the uniform error usually used to be replaced by a general distribution. This generalization can also be applied to approximate Markov chain Monte Carlo algorithms. In light of this work, ABC algorithms can be seen as calibration techniques for implicit stochastic models, inferring parameter values in light of the computer model, data, prior beliefs about the parameter values, and any measurement or model errors.
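The two acceptance rules described above fit in a few lines. In the sketch below the Gaussian model, the normal prior, and the sample mean as summary statistic are assumptions made only for the example; the hard 0-1 cut-off corresponds to a uniform additive error term, while the Gaussian kernel passed in the second call plays the role of a Gaussian model-error density.

```python
import math
import random

def abc_sample(s_obs, prior_sample, simulate, n_draws, tol=None, kernel=None):
    """Likelihood-free rejection sampler.  With `tol`, accept iff the
    simulated summary lies within tol of the observed one (0-1 cut-off).
    With `kernel`, accept with probability kernel(distance) in [0, 1]."""
    accepted = []
    while len(accepted) < n_draws:
        theta = prior_sample()
        d = abs(simulate(theta) - s_obs)        # distance between summaries
        ok = (d <= tol) if tol is not None else (random.random() < kernel(d))
        if ok:
            accepted.append(theta)
    return accepted

# Toy set-up (assumed for illustration): data ~ N(theta, 1), summary = mean
# of 50 observations, prior theta ~ N(0, 10).
prior = lambda: random.gauss(0.0, math.sqrt(10.0))
simulate = lambda theta: sum(random.gauss(theta, 1.0) for _ in range(50)) / 50

hard = abc_sample(1.3, prior, simulate, 500, tol=0.05)
soft = abc_sample(1.3, prior, simulate, 500,
                  kernel=lambda d: math.exp(-0.5 * (d / 0.05) ** 2))
```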
Lei Gong and James M. Flegal (2014)
A current challenge for many Bayesian analyses is determining when to terminate high-dimensional Markov chain Monte Carlo simulations. To this end, we propose using an automated sequential stopping procedure that terminates the simulation when the computational uncertainty is small relative to the posterior uncertainty. Such a stopping rule has previously been shown to work well in settings with posteriors of moderate dimension. In this paper, we illustrate its utility in high-dimensional simulations while overcoming some current computational issues. Further, we investigate the relationship between the stopping rule and effective sample size. As examples, we consider two complex Bayesian analyses on spatially and temporally correlated datasets. The first involves a dynamic space-time model on weather station data and the second a spatial variable selection model on fMRI brain imaging data. Our results show the sequential stopping rule is easy to implement, provides uncertainty estimates, and performs well in high-dimensional settings.
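A univariate sketch of such a rule is shown below: stop once the Monte Carlo standard error of a posterior mean, estimated by batch means, is at most a chosen fraction of the posterior standard deviation estimated from the same chain. The sqrt(n) batch size, the threshold eps, and the minimum run length are illustrative choices; the procedure studied in the paper is multivariate and designed for high-dimensional targets, which this sketch does not attempt to reproduce.

```python
import math

def batch_means_mcse(chain):
    """Monte Carlo standard error of the chain mean via batch means
    (assumes the chain is long enough to form at least two batches)."""
    n = len(chain)
    b = int(math.sqrt(n))                      # batch size ~ sqrt(n)
    a = n // b                                 # number of batches
    means = [sum(chain[i * b:(i + 1) * b]) / b for i in range(a)]
    grand = sum(means) / a
    var_b = sum((m - grand) ** 2 for m in means) / (a - 1)
    return math.sqrt(b * var_b / n)

def should_stop(chain, eps=0.05, min_n=1000):
    """Relative fixed-width stopping rule: computational uncertainty
    (MCSE) must be small relative to the posterior uncertainty (SD)."""
    n = len(chain)
    if n < min_n:
        return False
    mean = sum(chain) / n
    post_sd = math.sqrt(sum((x - mean) ** 2 for x in chain) / (n - 1))
    return batch_means_mcse(chain) <= eps * post_sd
```

In practice one would run the sampler in blocks, call should_stop on each monitored coordinate (or on a multivariate generalisation), and terminate only when every check passes.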
Approximate Bayesian computation (ABC) is computationally intensive for complex model simulators. To exploit expensive simulations, data-resampling via bootstrapping can be employed to obtain many artificial datasets at little cost. However, when using this approach within ABC, the posterior variance is inflated, thus resulting in biased posterior inference. Here we use stratified Monte Carlo to considerably reduce the bias induced by data resampling. We also show empirically that it is possible to obtain reliable inference using a larger than usual ABC threshold. Finally, we show that with stratified Monte Carlo we obtain a less variable ABC likelihood. Ultimately we show how our approach improves the computational efficiency of the ABC samplers. We construct several ABC samplers employing our methodology, such as rejection and importance ABC samplers, and ABC-MCMC samplers. We consider simulation studies for static (Gaussian, g-and-k distribution, Ising model, astronomical model) and dynamic models (Lotka-Volterra). We compare against state-of-the-art sequential Monte Carlo ABC samplers, synthetic likelihoods, and likelihood-free Bayesian optimization. For a computationally expensive Lotka-Volterra case study, we found that our strategy leads to a more than 10-fold computational saving, compared to a sampler that does not use our novel approach.
In this paper, we show how a complete and exact Bayesian analysis of a parametric mixture model is possible in some cases when components of the mixture are taken from exponential families and when conjugate priors are used. This restricted set-up allows us to show the relevance of the Bayesian approach as well as to exhibit the limitations of a complete analysis, namely that it is impossible to conduct this analysis when the sample size is too large, when the data are not from an exponential family, or when priors that are more complex than conjugate priors are used.
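To see why the sample size is the binding constraint, consider a small hypothetical instance: a two-component Bernoulli mixture with Beta(1,1) conjugate priors on the weight and on both success probabilities. Conditioning on an allocation vector restores conjugacy, so each of the $2^n$ allocations contributes a closed-form term and the exact posterior predictive is a finite weighted sum; the enumeration below is feasible only for small $n$, which is precisely the limitation described above. This toy model is an assumption for illustration, not one of the paper's examples.

```python
import math
from itertools import product

def log_beta(a, b):
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def exact_posterior_predictive(x):
    """Exact P(x_{n+1} = 1 | x) for the mixture
        x_i | z_i = 1 ~ Bernoulli(theta1),  x_i | z_i = 0 ~ Bernoulli(theta2),
        z_i | w ~ Bernoulli(w),  with w, theta1, theta2 ~ Beta(1, 1),
    obtained by summing closed-form conjugate terms over all 2^n allocations."""
    n = len(x)
    log_pz, pred = [], []
    for z in product([0, 1], repeat=n):
        n1 = sum(z)
        n0 = n - n1
        s1 = sum(xi for xi, zi in zip(x, z) if zi == 1)
        s0 = sum(xi for xi, zi in zip(x, z) if zi == 0)
        # log p(x, z): product of three Beta-Bernoulli marginal likelihoods
        log_pz.append(log_beta(1 + n1, 1 + n0)
                      + log_beta(1 + s1, 1 + n1 - s1)
                      + log_beta(1 + s0, 1 + n0 - s0))
        # E[w*theta1 + (1 - w)*theta2 | x, z] from the conjugate posteriors
        pred.append((1 + n1) / (2 + n) * (1 + s1) / (2 + n1)
                    + (1 + n0) / (2 + n) * (1 + s0) / (2 + n0))
    m = max(log_pz)
    wts = [math.exp(v - m) for v in log_pz]
    return sum(w * p for w, p in zip(wts, pred)) / sum(wts)

print(exact_posterior_predictive([1, 1, 0, 1, 0, 0, 1, 1, 0, 1]))
```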
Bayesian inference of Gibbs random fields (GRFs) is often referred to as a doubly intractable problem, since the likelihood function is intractable. The exploration of the posterior distribution of such models is typically carried out with a sophisticated Markov chain Monte Carlo (MCMC) method, the exchange algorithm (Murray et al., 2006), which requires simulations from the likelihood function at each iteration. The purpose of this paper is to consider an approach to dramatically reduce this computational overhead. To this end we introduce a novel class of algorithms which use realizations of the GRF model, simulated offline, at locations specified by a grid that spans the parameter space. This strategy dramatically speeds up posterior inference, as illustrated on several examples. However, using the pre-computed graphs introduces noise into the MCMC algorithm, which is then no longer exact. We study the theoretical behaviour of the resulting approximate MCMC algorithm and derive convergence bounds using a recent theoretical development on approximate MCMC methods.
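A minimal sketch of the idea, for a hypothetical one-parameter exponential-family model $f(x\mid\theta)\propto\exp(\theta\,s(x))$: realizations are simulated offline at the nodes of a grid spanning the parameter space and only their sufficient statistics are stored; the MCMC then reuses a stored draw from the grid node nearest to each proposal instead of simulating afresh, which is exactly where the approximation (and the noise mentioned above) enters. The simulator, the grid, the prior, and the random-walk proposal below are assumptions for the example, not the authors' implementation.

```python
import math
import random

def precompute(grid, simulate_suff_stat, draws_per_node):
    """Offline phase: store sufficient statistics of realizations simulated
    at each node of a grid spanning the parameter space."""
    return {th: [simulate_suff_stat(th) for _ in range(draws_per_node)]
            for th in grid}

def approx_exchange_mcmc(s_obs, log_prior, grid, bank, n_iter, theta0, step=0.5):
    """Exchange-style MCMC for f(x | theta) proportional to exp(theta * s(x)).
    The exact exchange algorithm simulates an auxiliary dataset at every
    proposal; here we reuse a pre-computed draw from the nearest grid node
    instead, which makes the chain approximate but much cheaper per step."""
    theta, chain = theta0, []
    for _ in range(n_iter):
        prop = theta + random.gauss(0.0, step)
        node = min(grid, key=lambda g: abs(g - prop))    # nearest grid node
        s_aux = random.choice(bank[node])                # offline realization
        # normalizing constants cancel in the exchange acceptance ratio
        log_alpha = ((prop - theta) * (s_obs - s_aux)
                     + log_prior(prop) - log_prior(theta))
        if random.random() < math.exp(min(0.0, log_alpha)):
            theta = prop
        chain.append(theta)
    return chain

# Toy instance: s(x) = x on x > 0 with theta < 0, i.e. an Exponential(-theta)
# model; prior density exp(theta) on theta < 0; one observation with s = 0.7.
grid = [-3.0 + 0.1 * i for i in range(30)]
bank = precompute(grid, lambda th: random.expovariate(-th), 200)
chain = approx_exchange_mcmc(0.7, lambda t: t if t < 0 else float("-inf"),
                             grid, bank, n_iter=5000, theta0=-1.0)
```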