
Fast Bayesian Intensity Estimation for the Permanental Process

Published by: Dr Christian Walder
Publication date: 2017
Research field: Mathematical statistics
Paper language: English





The Cox process is a stochastic process which generalises the Poisson process by letting the underlying intensity function itself be a stochastic process. In this paper we present a fast Bayesian inference scheme for the permanental process, a Cox process under which the square root of the intensity is a Gaussian process. In particular, we exploit connections with reproducing kernel Hilbert spaces to derive efficient approximate Bayesian inference algorithms based on the Laplace approximation to the predictive distribution and marginal likelihood. We obtain a simple algorithm which we apply to toy and real-world problems, obtaining orders-of-magnitude speed improvements over previous work.
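To make the setup concrete, here is a minimal sketch of MAP estimation for a permanental process, assuming a 1-D domain discretised on a grid, an RBF kernel, and plain gradient ascent; the Laplace approximation then centres a Gaussian at this mode. This is an illustration under our own assumptions, not the paper's RKHS-based algorithm, and names such as `permanental_map` are ours.

```python
import numpy as np

# Minimal sketch, assuming a gridded 1-D domain, an RBF kernel and plain
# gradient ascent -- NOT the paper's RKHS-based scheme. Intensity model:
# lambda(x) = f(x)^2 with f a Gaussian process.

def rbf_kernel(x, y, lengthscale=0.2, variance=1.0):
    d = x[:, None] - y[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def permanental_map(events, grid, n_iter=500, lr=1e-3, jitter=1e-6):
    """MAP of the whitened weights w, where f = L @ w and K = L @ L.T.

    Objective: sum_i log f(x_i)^2 - sum_g f(g)^2 * dx - 0.5 * ||w||^2.
    """
    dx = grid[1] - grid[0]
    K = rbf_kernel(grid, grid) + jitter * np.eye(len(grid))
    L = np.linalg.cholesky(K)
    idx = np.searchsorted(grid, events).clip(0, len(grid) - 1)
    w = np.linalg.solve(L, np.ones(len(grid)))   # start at f == 1 > 0
    for _ in range(n_iter):
        f = L @ w
        grad_f = -2.0 * f * dx                   # d/df of -integral f^2
        np.add.at(grad_f, idx, 2.0 / f[idx])     # d/df of sum_i log f^2
        w += lr * (L.T @ grad_f - w)             # "- w" is the N(0, I) prior
    return (L @ w) ** 2                          # estimated intensity on grid
```

On a toy problem one would call `permanental_map(events, np.linspace(0, 1, 200))`; the squared-GP link is what makes the Poisson-process integral a quadratic form, the property the paper's RKHS treatment exploits.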




Read also

Estimating the first-order intensity function in point pattern analysis is an important problem, and it has been approached so far from different perspectives: parametrically, semiparametrically or nonparametrically. Our approach is close to a semiparametric one. Motivated by eye-movement data, we introduce a convolution-type model where the log-intensity is modelled as the convolution of a function $\beta(\cdot)$, to be estimated, and a single spatial covariate (the image an individual is looking at, for eye-movement data). Based on a Fourier series expansion, we show that the proposed model is related to the log-linear model with an infinite number of coefficients, which correspond to the spectral decomposition of $\beta(\cdot)$. After truncation, we estimate these coefficients through a penalized Poisson likelihood and prove infill asymptotic results for a large class of spatial point processes. We illustrate the efficiency of the proposed methodology on simulated and real data.
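As a hedged illustration of the truncation step, the sketch below fits a finite Fourier expansion of a 1-D log-intensity by a penalised Poisson likelihood. It is our construction, not the authors' code: the paper expands the convolution kernel $\beta(\cdot)$, whereas here the log-intensity itself is expanded, and `fit_log_linear_intensity` is a hypothetical name.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical 1-D illustration of a truncated Fourier expansion of the
# log-intensity, fitted by penalised Poisson likelihood on [0, 1].

def fourier_basis(x, K):
    # columns: 1, cos(2*pi*k*x), sin(2*pi*k*x) for k = 1..K
    cols = [np.ones_like(x)]
    for k in range(1, K + 1):
        cols += [np.cos(2 * np.pi * k * x), np.sin(2 * np.pi * k * x)]
    return np.stack(cols, axis=1)

def fit_log_linear_intensity(events, K=5, ridge=1.0, n_grid=400):
    events = np.asarray(events, dtype=float)
    grid = np.linspace(0.0, 1.0, n_grid)
    dx = grid[1] - grid[0]
    B_ev, B_gr = fourier_basis(events, K), fourier_basis(grid, K)

    def objective(c):
        # -[ sum_i log lambda(x_i) - integral lambda ] + ridge penalty
        integral = np.exp(B_gr @ c).sum() * dx
        return -(B_ev @ c).sum() + integral + ridge * np.sum(c[1:] ** 2)

    res = minimize(objective, np.zeros(2 * K + 1), method="L-BFGS-B")
    return lambda x: np.exp(fourier_basis(np.asarray(x, dtype=float), K) @ res.x)
```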
Non-homogeneous Poisson processes are used in a wide range of scientific disciplines, ranging from the environmental sciences to the health sciences. Often, the central object of interest in a point process is the underlying intensity function. Here, we present a general model for the intensity function of a non-homogeneous Poisson process using measure transport. The model is built from a flexible bijective mapping that maps from the underlying intensity function of interest to a simpler reference intensity function. We enforce bijectivity by modeling the map as a composition of multiple simple bijective maps, and show that the model exhibits an important approximation property. Estimation of the flexible mapping is accomplished within an optimization framework, with computations done efficiently using modern deep-learning software and graphics processing units. Although we find that intensity function estimates obtained with our method are not necessarily superior to those obtained using conventional methods, the modeling representation brings with it other advantages such as facilitated point process simulation and uncertainty quantification. Modeling point processes in higher dimensions is also facilitated using our approach. We illustrate the use of our model on both simulated data, and a real data set containing the locations of seismic events near Fiji since 1964.
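A minimal 1-D sketch of the transport idea, under our own simplifications rather than the authors' deep-learning implementation: with a uniform reference intensity on [0, 1] and a map T built as a composition of simple logit-affine-sigmoid bijections of (0, 1), the integral of the modelled intensity mu * T'(x) is always mu, so fitting reduces to maximising sum_i log T'(x_i). The names `layer` and `fit_transport_intensity` are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the paper's composition of simple bijective maps.
# T maps (0, 1) onto (0, 1), so integral of mu * T'(x) equals mu and the
# Poisson MLE gives mu_hat = N.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def layer(x, a, b):
    """Bijection of (0, 1): y = sigmoid(a * logit(x) + b), with a > 0."""
    z = np.log(x) - np.log1p(-x)
    y = sigmoid(a * z + b)
    # log dy/dx = log a + log y + log(1 - y) - log x - log(1 - x)
    dlog = np.log(a) + np.log(y) + np.log1p(-y) - np.log(x) - np.log1p(-x)
    return y, dlog

def neg_loglik(params, events):
    x = np.clip(events, 1e-6, 1 - 1e-6)
    total = 0.0
    for a_raw, b in params.reshape(-1, 2):
        x, dlog = layer(x, np.exp(a_raw), b)     # exp keeps a positive
        total += dlog.sum()
    return -total + 1e-3 * np.sum(params ** 2)   # tiny ridge for stability

def fit_transport_intensity(events, n_layers=3):
    events = np.asarray(events, dtype=float)
    p0 = np.zeros(2 * n_layers)                  # every layer starts as identity
    res = minimize(neg_loglik, p0, args=(events,), method="L-BFGS-B")
    return res.x, len(events)                    # map parameters and mu_hat
```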
Yunbo Ouyang, Feng Liang (2017)
A nonparametric Bayes approach is proposed for the problem of estimating a sparse sequence based on Gaussian random variables. We adopt the popular two-group prior, with one component being a point mass at zero and the other being a mixture of Gaussian distributions. Although the Gaussian family has been shown to be suboptimal for this problem, we find that Gaussian mixtures, with a proper choice of the means and mixing weights, have the desired asymptotic behavior; e.g., the corresponding posterior concentrates on balls with the desired minimax rate. To achieve computational efficiency, we propose to obtain the posterior distribution using a deterministic variational algorithm. Empirical studies on several benchmark data sets demonstrate the superior performance of the proposed algorithm compared to other alternatives.
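Because every ingredient of this prior is Gaussian, the key posterior quantities are available in closed form. The sketch below computes exact inclusion probabilities and posterior means under the two-group Gaussian-mixture prior; it is a direct calculation illustrating the prior, not the paper's variational algorithm, and the hyper-parameter defaults are ours.

```python
import numpy as np
from scipy.stats import norm

# Closed-form posterior for z_i ~ N(theta_i, 1) under the two-group prior:
# theta ~ (1 - w) * delta_0 + w * sum_k pi_k N(mu_k, tau_k^2).

def posterior_mean(z, w=0.1, pis=(0.5, 0.5), mus=(0.0, 0.0), taus=(0.5, 2.0)):
    z = np.asarray(z, dtype=float)
    pis, mus, taus = map(np.asarray, (pis, mus, taus))
    # marginal of z under slab component k: N(mu_k, 1 + tau_k^2)
    m_k = norm.pdf(z[:, None], loc=mus, scale=np.sqrt(1.0 + taus ** 2))
    slab = m_k @ pis
    null = norm.pdf(z)                    # marginal under the point mass
    incl = w * slab / (w * slab + (1.0 - w) * null)  # P(theta != 0 | z)
    # conjugate Gaussian update within component k
    post_mu = (taus ** 2 * z[:, None] + mus) / (1.0 + taus ** 2)
    resp = pis * m_k / slab[:, None]      # within-slab responsibilities
    return incl * (resp * post_mu).sum(axis=1)
```

For example, `posterior_mean(np.array([0.2, 4.0]))` shrinks the small observation almost to zero while leaving the large one nearly untouched: the selective-shrinkage behaviour the two-group prior is designed for.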
This paper presents objective priors for robust Bayesian estimation against outliers based on divergences. The minimum $\gamma$-divergence estimator is well known to perform well under heavy contamination. Robust Bayesian methods using quasi-posterior distributions based on divergences have also been proposed in recent years. In the objective Bayesian framework, the selection of default prior distributions under such quasi-posterior distributions is an important problem. In this study, we provide some properties of reference and moment matching priors under the quasi-posterior distribution based on the $\gamma$-divergence. In particular, we show that the proposed priors are approximately robust under a condition on the contamination distribution, without assuming any conditions on the contamination ratio. Some simulation studies are also presented.
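As a concrete illustration of the $\gamma$-divergence machinery, here is our own sketch for a normal location model with unit variance (the function names are hypothetical). Each observation enters the loss through $f(x_i)^\gamma$, which downweights tail points and is the source of the robustness; a quasi-posterior would combine exp(-n * loss) with a prior on the location.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.optimize import minimize_scalar

# Sketch of the empirical gamma-cross-entropy for N(mu, 1). The integral
# of f^{1 + gamma} does not depend on mu for fixed variance, so it is
# dropped from the objective.

def gamma_loss(mu, x, gamma=0.5):
    logf = -0.5 * np.log(2 * np.pi) - 0.5 * (x - mu) ** 2
    # -(1 / gamma) * log( (1/n) * sum_i f(x_i)^gamma )
    return -(logsumexp(gamma * logf) - np.log(len(x))) / gamma

def robust_location(x, gamma=0.5):
    x = np.asarray(x, dtype=float)
    res = minimize_scalar(lambda m: gamma_loss(m, x, gamma),
                          bounds=(float(x.min()), float(x.max())),
                          method="bounded")
    return res.x
```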
We consider exact algorithms for Bayesian inference with model selection priors (including spike-and-slab priors) in the sparse normal sequence model. Because the best existing exact algorithm becomes numerically unstable for sample sizes over n=500, much attention has been given to alternative approaches such as approximate algorithms (Gibbs sampling, variational Bayes, etc.), shrinkage priors (e.g. the Horseshoe prior and the Spike-and-Slab LASSO), and empirical Bayesian methods. However, by introducing algorithmic ideas from online sequential prediction, we show that exact calculations are feasible for much larger sample sizes: for general model selection priors we reach n=25000, and for certain spike-and-slab priors we can easily reach n=100000. We further prove a de Finetti-like result for finite sample sizes that characterizes exactly which model selection priors can be expressed as spike-and-slab priors. The computational speed and numerical accuracy of the proposed methods are demonstrated in experiments on simulated data, on a differential gene expression data set, and in a comparison of the effect of multiple hyper-parameter settings in the beta-binomial prior. In our experimental evaluation we compute guaranteed bounds on the numerical accuracy of all new algorithms, which shows that the proposed methods are numerically reliable whereas an alternative based on long division is not.
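To show why exactness is delicate here, the sketch below (ours, not the paper's algorithm) works out one direct exact computation: under an exchangeable model selection prior, the posterior over the number of nonzero means involves elementary symmetric polynomials of the per-coordinate Bayes factors, computed by the classical O(n^2) log-space recursion. This is the kind of step that becomes fragile for large n and that faster, more stable algorithmic ideas replace.

```python
import numpy as np
from scipy.special import logsumexp, betaln
from scipy.stats import norm

# Exact posterior over the number k of nonzero means in the sparse normal
# sequence model, under a beta-binomial(a, b) model selection prior and a
# N(0, slab_sd^2) slab with unit-variance noise. Illustrative only.

def log_elementary_symmetric(log_r):
    """log e_k(r_1, ..., r_n) for k = 0..n, computed in log space."""
    log_e = np.full(len(log_r) + 1, -np.inf)
    log_e[0] = 0.0
    for lr in log_r:
        # e_k <- e_k + r_i * e_{k-1}; the RHS is evaluated before assignment
        log_e[1:] = np.logaddexp(log_e[1:], lr + log_e[:-1])
    return log_e

def posterior_num_nonzero(x, slab_sd=3.0, a=1.0, b=1.0):
    x = np.asarray(x, dtype=float)
    # Bayes factor per coordinate: slab marginal N(0, 1 + slab_sd^2) vs N(0, 1)
    log_r = (norm.logpdf(x, scale=np.sqrt(1.0 + slab_sd ** 2))
             - norm.logpdf(x))
    log_e = log_elementary_symmetric(log_r)
    n = len(x)
    k = np.arange(n + 1)
    # prior(k) / C(n, k) = B(k + a, n - k + b) / B(a, b); the C(n, k) from
    # averaging over subsets of size k cancels exactly
    log_w = betaln(k + a, n - k + b) - betaln(a, b) + log_e
    return np.exp(log_w - logsumexp(log_w))
```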