
Exact Bayesian Inference for the Bingham Distribution

Published by: Theodore Kypraios
Publication date: 2014
Research field: Mathematical statistics
Language: English





This paper is concerned with making Bayesian inference from data that are assumed to be drawn from a Bingham distribution. A barrier to the Bayesian approach is the parameter-dependent normalising constant of the Bingham distribution, which, even when it can be evaluated or accurately approximated, would have to be calculated at each iteration of an MCMC scheme, thereby greatly increasing the computational burden. We propose a method which enables exact (in a Monte Carlo sense) Bayesian inference for the unknown parameters of the Bingham distribution by completely avoiding the need to evaluate this constant. We apply the method to simulated and real data, and illustrate that it is simpler to implement, faster, and performs better than an alternative algorithm that has recently been proposed in the literature.
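The abstract does not spell out the algorithm, but one standard route to exact inference under a parameter-dependent normalising constant is the exchange algorithm of Murray et al. (2006), which needs only exact simulation from the model; for the Bingham distribution, with density proportional to exp(x'Ax) on the unit sphere, such simulation is available by rejection sampling. The sketch below, assuming hypothetical helpers bingham_rvs and log_prior that are not taken from the paper, shows how the unknown constant drops out of the acceptance ratio.

```python
import numpy as np

# Hypothetical helpers -- not taken from the paper:
#   bingham_rvs(A, n): n exact draws from the Bingham density
#       f(x; A) = exp(x' A x) / c(A) on the unit sphere
#       (e.g. via a rejection sampler).
#   log_prior(A): log prior density of the symmetric parameter matrix A.

def log_q(X, A):
    """Unnormalised Bingham log-likelihood: sum_i x_i' A x_i."""
    return np.einsum('ij,jk,ik->', X, A, X)

def exchange_step(A, X, log_prior, bingham_rvs, prop_scale=0.1, rng=None):
    """One exchange-algorithm update of A given data X (n x p)."""
    rng = np.random.default_rng() if rng is None else rng
    # Symmetric random-walk proposal on the parameter matrix.
    E = rng.normal(scale=prop_scale, size=A.shape)
    A_prop = A + (E + E.T) / 2.0
    # Auxiliary data simulated exactly from the proposed parameter.
    W = bingham_rvs(A_prop, X.shape[0])
    # The intractable constant c(.) cancels between the observed-data and
    # auxiliary-data terms, so the ratio involves only tractable quantities.
    log_alpha = (log_prior(A_prop) - log_prior(A)
                 + log_q(X, A_prop) - log_q(X, A)
                 + log_q(W, A) - log_q(W, A_prop))
    if np.log(rng.uniform()) < log_alpha:
        return A_prop
    return A
```

The point of the construction is that c(A) would multiply both the observed-data and the auxiliary-data likelihoods in the acceptance ratio, so it never needs to be evaluated; whether this is exactly the scheme used in the paper is not stated in the abstract.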




Read also

In recent years dynamical modelling has been provided with a range of breakthrough methods to perform exact Bayesian inference. However, it is often computationally infeasible to apply exact statistical methodologies in the context of large datasets and complex models. This paper considers a nonlinear stochastic differential equation model observed with correlated measurement errors, with an application to protein folding modelling. An Approximate Bayesian Computation (ABC) MCMC algorithm is suggested to allow inference for model parameters within reasonable time constraints. The ABC algorithm uses simulations of subsamples from the assumed data-generating model as well as a so-called early rejection strategy to speed up computations in the ABC-MCMC sampler. Using a considerable degree of subsampling does not seem to degrade the quality of the inferential results for the considered applications. A simulation study is conducted to compare our strategy with exact Bayesian inference, the latter being two orders of magnitude slower than ABC-MCMC for the considered setup. Finally, the ABC algorithm is applied to a large protein dataset. The suggested methodology is fairly general and not limited to the exemplified model and data.
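As a rough illustration of the early rejection idea described above (the notation and helper functions here are assumptions, not taken from the paper), the prior part of the Metropolis acceptance test can be checked before the costly model simulation is run, and the simulation itself can be restricted to a subsample of the data:

```python
import numpy as np

# Hypothetical placeholders -- not from the paper:
#   simulate(theta, idx): simulate the model only at the subsample indices idx
#   distance(z, y_sub):   discrepancy between simulated and observed subsample
#   log_prior(theta):     log prior density

def abc_mcmc_early_reject(theta, y, idx, eps, simulate, distance, log_prior,
                          prop_scale=0.1, rng=None):
    """One ABC-MCMC step with early rejection."""
    rng = np.random.default_rng() if rng is None else rng
    theta_prop = theta + rng.normal(scale=prop_scale, size=np.shape(theta))
    u = rng.uniform()
    # Early rejection: with a symmetric random-walk proposal the acceptance
    # ratio reduces to the prior ratio, which is checked first so that an
    # expensive simulation can often be skipped entirely.
    if np.log(u) > log_prior(theta_prop) - log_prior(theta):
        return theta
    # Only now simulate, and only for a subsample of the full dataset.
    z = simulate(theta_prop, idx)
    if distance(z, y[idx]) < eps:
        return theta_prop
    return theta
```

Drawing u before simulating and rejecting immediately when it exceeds the prior ratio leaves the invariant distribution unchanged, because the overall acceptance rule is identical to that of standard ABC-MCMC.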
Many modern statistical applications involve inference for complicated stochastic models for which the likelihood function is difficult or even impossible to calculate, and hence conventional likelihood-based inferential techniques cannot be used. In such settings, Bayesian inference can be performed using Approximate Bayesian Computation (ABC). However, in spite of many recent developments to ABC methodology, in many applications the computational cost of ABC necessitates the choice of summary statistics and tolerances that can potentially severely bias the estimate of the posterior. We propose a new piecewise ABC approach suitable for discretely observed Markov models that involves writing the posterior density of the parameters as a product of factors, each a function of only a subset of the data, and then using ABC within each factor. The approach has the advantage of side-stepping the need to choose a summary statistic and it enables a stringent tolerance to be set, making the posterior less approximate. We investigate two methods for estimating the posterior density based on ABC samples for each of the factors: the first is to use a Gaussian approximation for each factor, and the second is to use a kernel density estimate. Both methods have their merits. The Gaussian approximation is simple, fast, and probably adequate for many applications. On the other hand, using instead a kernel density estimate has the benefit of consistently estimating the true ABC posterior as the number of ABC samples tends to infinity. We illustrate the piecewise ABC approach for three examples; in each case, the approach enables exact matching between simulations and data and offers fast and accurate inference.
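A schematic of the factorisation that the piecewise approach exploits for a discretely observed Markov model (the notation is ours, not the paper's):

```latex
% The Markov property lets the posterior factorise over transitions,
\pi(\theta \mid y_{1:n}) \;\propto\;
    \pi(\theta)\, f(y_1 \mid \theta)
    \prod_{i=2}^{n} f(y_i \mid y_{i-1}, \theta),
% so ABC can be applied to each low-dimensional factor separately,
% matching only the pair (y_{i-1}, y_i) rather than the whole dataset;
% the factor-wise estimates (Gaussian or kernel) are then multiplied
% back together to approximate the full posterior.
```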
We present a Bayesian inference approach to estimating the cumulative mass profile and mean squared velocity profile of a globular cluster given the spatial and kinematic information of its stars. Mock globular clusters with a range of sizes and concentrations are generated from lowered isothermal dynamical models, from which we test the reliability of the Bayesian method to estimate model parameters through repeated statistical simulation. We find that, given unbiased star samples, we are able to reconstruct the cluster parameters used to generate the mock cluster and the cluster's cumulative mass and mean squared velocity profiles with good accuracy. We further explore how strongly biased sampling, which could be the result of observing constraints, may affect this approach. Our tests indicate that if we instead have biased samples, then our estimates can be off in ways that depend on cluster morphology. Overall, our findings motivate obtaining samples of stars that are as unbiased as possible. This may be achieved by combining information from multiple telescopes (e.g., Hubble and Gaia), but will require careful modeling of the measurement uncertainties through a hierarchical model, which we plan to pursue in future work.
Daniel Yekutieli, 2011
We address the problem of providing inference from a Bayesian perspective for parameters selected after viewing the data. We present a Bayesian framework for providing inference for selected parameters, based on the observation that providing Bayesian inference for selected parameters is a truncated data problem. We show that if the prior for the parameter is non-informative, or if the parameter is a fixed unknown constant, then it is necessary to adjust the Bayesian inference for selection. Our second contribution is the introduction of Bayesian False Discovery Rate controlling methodology, which generalizes existing Bayesian FDR methods that are only defined in the two-group mixture model. We illustrate our results by applying them to simulated data and data from a microarray experiment.
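In generic notation (an assumption on our part, not taken from the paper), the truncated-data view amounts to replacing the likelihood with its truncation to the selection event S:

```latex
% Selection-adjusted posterior built from a truncated likelihood:
\pi_{\mathrm{sel}}(\theta \mid y) \;\propto\;
    \pi(\theta)\, \frac{f(y \mid \theta)}{\Pr_{\theta}(Y \in S)},
    \qquad y \in S .
% The denominator depends on theta, which is why, as the abstract notes,
% inference must be adjusted for selection when the prior is non-informative
% or the parameter is a fixed unknown constant.
```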
In this paper, we show how a complete and exact Bayesian analysis of a parametric mixture model is possible in some cases when components of the mixture are taken from exponential families and when conjugate priors are used. This restricted set-up allows us to show the relevance of the Bayesian approach as well as to exhibit the limitations of a complete analysis, namely that it is impossible to conduct this analysis when the sample size is too large, when the data are not from an exponential family, or when priors that are more complex than conjugate priors are used.
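A schematic of why the complete analysis becomes infeasible for large samples (the notation is ours): the exact posterior for a k-component mixture is a sum over every possible allocation of the n observations to components:

```latex
% z = (z_1, ..., z_n) are the component allocations:
\pi(\theta, w \mid y_{1:n}) \;\propto\;
    \sum_{z \in \{1,\dots,k\}^{n}} \pi(\theta, w)
    \prod_{i=1}^{n} w_{z_i}\, f(y_i \mid \theta_{z_i}) .
% With exponential-family components and conjugate priors each term is
% available in closed form, but the number of terms grows as k^n, which
% is what rules out the complete analysis for large n.
```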