
A sparse linear algebra algorithm for fast computation of prediction variances with Gaussian Markov random fields

Publication date: 2017. Language: English.





Gaussian Markov random fields are used across a wide range of disciplines, including machine vision and spatial statistics. The models take advantage of sparsity in matrices introduced through the Markov assumptions, and all operations in inference and prediction use sparse linear algebra operations that scale well with dimensionality. Yet, for very high-dimensional models, exact computation of predictive variances of linear combinations of variables is generally computationally prohibitive, and approximate methods (generally interpolation or conditional simulation) are typically used instead. A set of conditions is established under which the variances of linear combinations of random variables can be computed exactly using the Takahashi recursions. The ensuing computational simplification has wide applicability and may be used to enhance several software packages where model fitting is set in a maximum-likelihood framework. The resulting algorithm is ideal for use in a variety of spatial statistical applications, including LatticeKrig modelling, statistical downscaling, and fixed rank kriging. It can compute hundreds of thousands of exact predictive variances of linear combinations on a standard desktop with ease, even when large spatial GMRF models are used.
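The Takahashi recursions mentioned above can be sketched as follows. This is a minimal dense toy implementation (the function name and the small test matrix are illustrative choices, not from the paper); the practical value of the method comes from restricting the recursion to the sparsity pattern of the Cholesky factor of a sparse GMRF precision matrix, which this dense sketch does not attempt.

```python
import numpy as np

def takahashi_partial_inverse(Q):
    """Compute Sigma = Q^{-1} via the Takahashi recursions.

    Dense illustrative version: a sparse implementation would run the
    recursion only over the sparsity pattern of the Cholesky factor,
    which is what makes it cheap for large GMRF precision matrices.
    """
    n = Q.shape[0]
    L = np.linalg.cholesky(Q)        # Q = L L^T
    d = np.diag(L) ** 2              # LDL^T form: D = diag(L)^2 ...
    Lu = L / np.diag(L)              # ... with unit lower-triangular factor
    Sigma = np.zeros((n, n))
    # Takahashi recursions, filled from the bottom-right corner upwards:
    #   Sigma_ij = delta_ij / d_i - sum_{k>i} Lu_ki * Sigma_kj   (j >= i)
    for i in range(n - 1, -1, -1):
        for j in range(n - 1, i - 1, -1):
            s = (1.0 / d[i] if i == j else 0.0)
            for k in range(i + 1, n):
                s -= Lu[k, i] * Sigma[k, j]
            Sigma[i, j] = Sigma[j, i] = s
    return Sigma

# Example: exact predictive variance of a linear combination a^T x
rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
Q = B @ B.T + 6 * np.eye(6)          # a small SPD stand-in for a precision matrix
Sigma = takahashi_partial_inverse(Q)
a = np.ones(6)
print(a @ Sigma @ a)                 # Var(a^T x) for x ~ N(0, Q^{-1})
```

In the sparse setting only the entries of Sigma lying in the Cholesky sparsity pattern are produced, which is why the paper's conditions on the linear combinations matter: they guarantee that those entries suffice.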

Related research

Bayesian inference of Gibbs random fields (GRFs) is often referred to as a doubly intractable problem, since the likelihood function is intractable. The exploration of the posterior distribution of such models is typically carried out with a sophisticated Markov chain Monte Carlo (MCMC) method, the exchange algorithm (Murray et al., 2006), which requires simulations from the likelihood function at each iteration. The purpose of this paper is to consider an approach to dramatically reduce this computational overhead. To this end we introduce a novel class of algorithms which use realizations of the GRF model, simulated offline, at locations specified by a grid that spans the parameter space. This strategy dramatically speeds up posterior inference, as illustrated on several examples. However, using the pre-computed simulations introduces noise into the MCMC algorithm, which is no longer exact. We study the theoretical behaviour of the resulting approximate MCMC algorithm and derive convergence bounds using a recent theoretical development on approximate MCMC methods.
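A minimal sketch of the pre-computation idea, using a toy 1D Ising chain as the Gibbs random field (the paper's setting is more general); the grid spacing, proposal scale, chain lengths, and nearest-grid-point lookup are all illustrative choices. Offline, realizations (here reduced to their sufficient statistics) are simulated on a grid of parameter values; online, the exchange algorithm draws its auxiliary variable from the pre-computed bank instead of simulating afresh, which is exactly the step that makes the resulting chain approximate.

```python
import numpy as np

rng = np.random.default_rng(1)

def gibbs_ising(thetas, n=200, sweeps=300):
    """Simulate 1D Ising chains s in {-1,+1}^n, one per entry of `thetas`,
    via vectorised checkerboard Gibbs sweeps (even sites, then odd sites);
    return the sufficient statistics s(y) = sum_i y_i y_{i+1}."""
    thetas = np.atleast_1d(thetas).astype(float)[:, None]
    s = rng.choice([-1.0, 1.0], size=(thetas.shape[0], n))
    for _ in range(sweeps):
        for start in (0, 1):
            h = np.zeros_like(s)
            h[:, 1:] += s[:, :-1]              # left neighbours
            h[:, :-1] += s[:, 1:]              # right neighbours
            p = 1.0 / (1.0 + np.exp(-2.0 * thetas * h))
            idx = np.arange(start, n, 2)
            s[:, idx] = np.where(rng.random(p[:, idx].shape) < p[:, idx], 1.0, -1.0)
    return (s[:, :-1] * s[:, 1:]).sum(axis=1)

# --- offline: pre-compute sufficient statistics on a parameter grid ---
grid = np.linspace(0.0, 1.0, 41)
m = 20                                          # stored realisations per grid point
bank = gibbs_ising(np.repeat(grid, m)).reshape(len(grid), m)

# --- synthetic data at theta = 0.5, so the answer is known ---
s_obs = gibbs_ising([0.5], sweeps=1000)[0]

# --- online: exchange-style MCMC using the pre-computed bank ---
theta, chain = 0.2, []
for _ in range(5000):
    prop = theta + 0.1 * rng.standard_normal()
    if 0.0 < prop < 1.0:                        # Uniform(0,1) prior
        g = np.argmin(np.abs(grid - prop))      # nearest pre-computed theta
        s_aux = bank[g, rng.integers(m)]
        # exchange acceptance log-ratio for f(y | theta) = exp(theta * s(y))
        if np.log(rng.random()) < (prop - theta) * (s_obs - s_aux):
            theta = prop
    chain.append(theta)
print(np.mean(chain[1000:]))                    # posterior mean, near the true 0.5
```

Because f(y | theta) = exp(theta * s(y)) is an exponential family, the intractable normalizing constants cancel in the exchange ratio, leaving only the sufficient statistics.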
Many modern statistical applications involve inference for complicated stochastic models for which the likelihood function is difficult or even impossible to calculate, and hence conventional likelihood-based inferential techniques cannot be used. In such settings, Bayesian inference can be performed using Approximate Bayesian Computation (ABC). However, in spite of many recent developments to ABC methodology, in many applications the computational cost of ABC necessitates the choice of summary statistics and tolerances that can potentially severely bias the estimate of the posterior. We propose a new piecewise ABC approach suitable for discretely observed Markov models that involves writing the posterior density of the parameters as a product of factors, each a function of only a subset of the data, and then using ABC within each factor. The approach has the advantage of side-stepping the need to choose a summary statistic and it enables a stringent tolerance to be set, making the posterior less approximate. We investigate two methods for estimating the posterior density based on ABC samples for each of the factors: the first is to use a Gaussian approximation for each factor, and the second is to use a kernel density estimate. Both methods have their merits. The Gaussian approximation is simple, fast, and probably adequate for many applications. On the other hand, using instead a kernel density estimate has the benefit of consistently estimating the true ABC posterior as the number of ABC samples tends to infinity. We illustrate the piecewise ABC approach for three examples; in each case, the approach enables exact matching between simulations and data and offers fast and accurate inference.
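A sketch of the piecewise idea with the Gaussian-approximation variant, assuming a toy Gaussian AR(1) Markov chain and a Gaussian prior (model, tolerance, and sample sizes are illustrative, not the paper's examples). Rejection ABC is run separately on each transition factor with no summary statistic, each factor is approximated by a Gaussian, and the factors are combined analytically; since each factor already carries one copy of the prior, the surplus prior copies are divided back out.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy discretely observed Markov model: x_t = theta * x_{t-1} + N(0,1) noise,
# with a N(0, sigma0^2) prior on theta.
theta_true, T, sigma0 = 0.6, 100, 2.0
x = np.zeros(T + 1)
for t in range(1, T + 1):
    x[t] = theta_true * x[t - 1] + rng.standard_normal()

# Piecewise ABC: one rejection-ABC pass per transition factor
# p(x_t | x_{t-1}, theta) -- no summary statistic, just a small tolerance.
tol, n_draws = 0.3, 4000
means, precs = [], []
for t in range(1, T + 1):
    theta = sigma0 * rng.standard_normal(n_draws)      # draws from the prior
    sim = theta * x[t - 1] + rng.standard_normal(n_draws)
    acc = theta[np.abs(sim - x[t]) < tol]              # near-exact matching
    if acc.size >= 30:                                 # enough samples to fit
        means.append(acc.mean())
        precs.append(1.0 / acc.var())

# Gaussian approximation of each factor, combined analytically.  Each factor
# includes one copy of the prior, so k-1 surplus copies are removed (the
# prior has mean zero, so only the precision needs correcting).
m, p, k = np.array(means), np.array(precs), len(means)
post_prec = p.sum() - (k - 1) / sigma0**2
post_mean = (p * m).sum() / post_prec
print(post_mean, 1.0 / np.sqrt(post_prec))   # should land near theta_true
```

For this linear-Gaussian toy model the combination rule is exact up to ABC noise, which makes it a convenient sanity check; the paper's interest is of course in models where the per-factor likelihoods are themselves intractable.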
We consider Gaussian measures $\mu, \tilde{\mu}$ on a separable Hilbert space, with fractional-order covariance operators $A^{-2\beta}$ resp. $\tilde{A}^{-2\tilde{\beta}}$, and derive necessary and sufficient conditions on $A, \tilde{A}$ and $\beta, \tilde{\beta} > 0$ for I. equivalence of the measures $\mu$ and $\tilde{\mu}$, and II. uniform asymptotic optimality of linear predictions for $\mu$ based on the misspecified measure $\tilde{\mu}$. These results hold, e.g., for Gaussian processes on compact metric spaces. As an important special case, we consider the class of generalized Whittle-Matérn Gaussian random fields, where $A$ and $\tilde{A}$ are elliptic second-order differential operators, formulated on a bounded Euclidean domain $\mathcal{D}\subset\mathbb{R}^d$ and augmented with homogeneous Dirichlet boundary conditions. Our outcomes explain why the predictive performances of stationary and non-stationary models in spatial statistics often are comparable, and provide a crucial first step in deriving consistency results for parameter estimation of generalized Whittle-Matérn fields.
We provide a method for fast and exact simulation of Gaussian random fields on spheres having isotropic covariance functions. The method proposed is then extended to Gaussian random fields defined over spheres cross time and having covariance functions that depend on geodesic distance in space and on temporal separation. The crux of the method is in the use of block circulant matrices obtained working on regular grids defined over longitude $\times$ latitude.
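The circulant trick can be illustrated in its simplest one-dimensional form, on a regular grid of longitudes around a circle (the paper works on full longitude-by-latitude grids, where the covariance matrix is block circulant rather than circulant; the exponential covariance and grid size below are illustrative). Because the covariance depends only on geodesic distance, the covariance matrix is circulant, its eigenvalues are the DFT of its first row, and exact samples can be drawn with FFTs.

```python
import numpy as np

rng = np.random.default_rng(3)

# Regular grid of n longitudes on the circle; the covariance depends only on
# geodesic distance, so the covariance matrix is circulant and diagonalised
# by the discrete Fourier transform.
n = 32
ang = 2 * np.pi * np.arange(n) / n
geo = np.minimum(ang, 2 * np.pi - ang)            # geodesic distance to point 0
c = np.exp(-geo)                                  # first row of the covariance
lam = np.fft.fft(c).real                          # circulant eigenvalues
assert lam.min() > 0                              # valid covariance on this grid

def simulate(m):
    """Draw m exact samples of the field in O(m n log n) via the FFT."""
    eps = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
    return np.real(np.fft.fft(np.sqrt(lam) * eps, axis=1)) / np.sqrt(n)

samples = simulate(50000)
C = np.array([np.roll(c, k) for k in range(n)])   # full circulant covariance
emp = samples.T @ samples / samples.shape[0]
print(np.abs(emp - C).max())                      # small Monte Carlo error
```

The sampler is exact (not approximate): each complex standard-normal spectral coefficient is scaled by the square root of its eigenvalue, and taking the real part recovers a real field with exactly the target covariance.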
Markov chain Monte Carlo is a widely-used technique for generating a dependent sequence of samples from complex distributions. Conventionally, these methods require a source of independent random variates. Most implementations use pseudo-random numbers instead because generating true independent variates with a physical system is not straightforward. In this paper we show how to modify some commonly used Markov chains to use a dependent stream of random numbers in place of independent uniform variates. The resulting Markov chains have the correct invariant distribution without requiring detailed knowledge of the stream's dependencies or even its marginal distribution. As a side-effect, sometimes far fewer random numbers are required to obtain accurate results.
