
Multilevel approximation of Gaussian random fields: Covariance compression, estimation and spatial prediction

 Added by Kristin Kirchner
Publication date: 2021
Language: English





Centered Gaussian random fields (GRFs) indexed by compacta such as smooth, bounded Euclidean domains or smooth, compact and orientable manifolds are determined by their covariance operators. We consider centered GRFs given as variational solutions to coloring operator equations driven by spatial white noise, with an elliptic self-adjoint pseudodifferential coloring operator from the Hörmander class. This includes the Matérn class of GRFs as a special case. Using biorthogonal multiresolution analyses on the manifold, we prove that the precision and covariance operators, respectively, may be identified with bi-infinite matrices, and that finite sections may be diagonally preconditioned, rendering the condition number independent of the dimension $p$ of the section. We prove that a tapering strategy by thresholding, applied to finite sections of the bi-infinite precision and covariance matrices, results in optimally numerically sparse approximations. That is, asymptotically only linearly many nonzero matrix entries suffice to approximate the original section of the bi-infinite covariance or precision matrix to arbitrary precision. The locations of these nonzero matrix entries are known a priori. The tapered covariance or precision matrices may also be optimally diagonally preconditioned. Analysis of the relative size of the entries of the tapered covariance matrices motivates novel multilevel Monte Carlo (MLMC) oracles for covariance estimation, with sample complexity that scales log-linearly in the number $p$ of parameters. In addition, we propose and analyze a novel compressive algorithm for the simulation and kriging of GRFs. The complexity (work and memory vs. accuracy) of these three algorithms scales near-optimally in terms of the number of parameters $p$ of the sample-wise approximation of the GRF in Sobolev scales.
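The tapering idea in the abstract can be illustrated with a minimal numerical sketch (this is not the paper's multiresolution construction; the grid, lengthscale, and threshold below are illustrative assumptions): zero out all covariance entries below a threshold and check that the result is sparse yet spectrally accurate.

```python
import numpy as np

p = 200
x = np.linspace(0.0, 1.0, p)
# exponential (Matern nu = 1/2) covariance, unit variance, lengthscale 0.02
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.02)

tau = 1e-3                                  # thresholding parameter
C_tap = np.where(np.abs(C) >= tau, C, 0.0)  # taper by thresholding

nnz_per_row = np.count_nonzero(C_tap) / p   # roughly constant in p for fixed tau
err = np.linalg.norm(C - C_tap, 2)          # spectral-norm tapering error
```

For a fixed threshold the number of retained entries per row stays bounded as the grid is refined, so the tapered matrix has asymptotically linearly many nonzeros while the spectral error remains small.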



Related research

Series expansions of isotropic Gaussian random fields on $\mathbb{S}^2$ with independent Gaussian coefficients and localized basis functions are constructed. Such representations provide an alternative to the standard Karhunen-Loève expansions of isotropic random fields in terms of spherical harmonics. Their multilevel localized structure of basis functions is especially useful in adaptive algorithms. The basis functions are obtained by applying the square root of the covariance operator to spherical needlets. Localization of the resulting covariance-dependent multilevel basis is shown under decay conditions on the angular power spectrum of the random field. In addition, numerical illustrations are given and an application to random elliptic PDEs on the sphere is analyzed.
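A truncated series expansion with independent Gaussian coefficients can be sketched in a simplified setting (a field on the circle with a Fourier basis, as a 1D analogue of $\mathbb{S}^2$; this is not the needlet construction, and the power-spectrum decay rate is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
L = 32                                    # truncation level
theta = np.linspace(0.0, 2 * np.pi, 256, endpoint=False)

A = (1.0 + np.arange(L)) ** -4.0          # decaying angular power spectrum
field = np.sqrt(A[0]) * rng.standard_normal() * np.ones_like(theta)
for l in range(1, L):
    a, b = rng.standard_normal(2)         # independent Gaussian coefficients
    field += np.sqrt(A[l]) * (a * np.cos(l * theta) + b * np.sin(l * theta))
```

Each draw of the coefficients yields one sample path of the isotropic field; the decay of `A` controls its smoothness.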
Although the operator (spectral) norm is one of the most widely used metrics for covariance estimation, comparatively little is known about the fluctuations of error in this norm. To be specific, let $\hat{\Sigma}$ denote the sample covariance matrix of $n$ observations in $\mathbb{R}^p$ that arise from a population matrix $\Sigma$, and let $T_n = \sqrt{n}\,\|\hat{\Sigma} - \Sigma\|_{\text{op}}$. In the setting where the eigenvalues of $\Sigma$ have a decay profile of the form $\lambda_j(\Sigma) \asymp j^{-2\beta}$, we analyze how well the bootstrap can approximate the distribution of $T_n$. Our main result shows that up to factors of $\log(n)$, the bootstrap can approximate the distribution of $T_n$ at the dimension-free rate of $n^{-\frac{\beta - 1/2}{6\beta + 4}}$, with respect to the Kolmogorov metric. Perhaps surprisingly, a result of this type appears to be new even in settings where $p < n$. More generally, we discuss the consequences of this result beyond covariance matrices and show how the bootstrap can be used to estimate the errors of sketching algorithms in randomized numerical linear algebra (RandNLA). An illustration of these ideas is also provided with a climate data example.
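The bootstrap setup for the statistic $T_n$ can be sketched as follows (a minimal illustration with a diagonal population covariance and $\beta = 1$; the resample count and dimensions are illustrative assumptions, and the rate analysis of the abstract is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 50
lam = np.arange(1, p + 1) ** -2.0                 # lambda_j ~ j^(-2 beta), beta = 1
X = rng.standard_normal((n, p)) * np.sqrt(lam)    # population covariance diag(lam)

Sigma_hat = X.T @ X / n
T_n = np.sqrt(n) * np.linalg.norm(Sigma_hat - np.diag(lam), 2)

# bootstrap: resample rows with replacement, recenter at Sigma_hat
T_boot = []
for _ in range(200):
    Xb = X[rng.integers(0, n, n)]
    Sb = Xb.T @ Xb / n
    T_boot.append(np.sqrt(n) * np.linalg.norm(Sb - Sigma_hat, 2))

q90 = np.quantile(T_boot, 0.9)                    # bootstrap quantile for T_n
```

In practice the bootstrap quantiles of `T_boot` serve as a proxy for the unknown distribution of $T_n$, e.g. to build confidence statements for the operator-norm error.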
The asymptotic variance of the maximum likelihood estimate is proved to decrease when the maximization is restricted to a subspace that contains the true parameter value. Maximum likelihood estimation allows a systematic fitting of covariance models to the sample, which is important in data assimilation. The hierarchical maximum likelihood approach is applied to the spectral diagonal covariance model with different parameterizations of eigenvalue decay, and to the sparse inverse covariance model with specified parameter values on different sets of nonzero entries. It is shown computationally that using smaller sets of parameters can decrease the sampling noise in high dimension substantially.
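Fitting a restricted, low-parameter covariance model by maximum likelihood can be sketched in its simplest form (a hypothetical one-parameter spectral diagonal model $\lambda_j(\beta) = j^{-\beta}$ fitted by grid search; the specific parameterization and sizes are illustrative assumptions, not the paper's models):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, beta_true = 500, 40, 1.5
j = np.arange(1, p + 1)
X = rng.standard_normal((n, p)) * np.sqrt(j ** -beta_true)

s2 = (X ** 2).mean(axis=0)               # per-coordinate sample variances

def neg_loglik(beta):
    lam = j ** -beta                     # model eigenvalues
    # Gaussian negative log-likelihood (up to constants and the factor n)
    return 0.5 * np.sum(np.log(lam) + s2 / lam)

betas = np.linspace(0.5, 3.0, 251)
beta_hat = betas[np.argmin([neg_loglik(b) for b in betas])]
```

Restricting the fit to the single decay parameter `beta`, rather than estimating all $p$ eigenvalues separately, is what suppresses the sampling noise in high dimension.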
In this talk I describe MAGIC, an efficient approach to covariance estimation and signal reconstruction for Gaussian random fields (MAGIC Allows Global Inference of Covariance). It solves a long-standing problem in the field of cosmic microwave background (CMB) data analysis but is in fact a general technique that can be applied to noisy, contaminated and incomplete or censored measurements of either spatial or temporal Gaussian random fields. In this talk I will phrase the method in a way that emphasizes its general structure and applicability but I comment on applications in the CMB context. The method allows the exploration of the full non-Gaussian joint posterior density of the signal and parameters in the covariance matrix (such as the power spectrum) given the data. It generalizes the familiar Wiener filter in that it automatically discovers signal correlations in the data as long as a noise model is specified and priors encode what is known about potential contaminants. The key methodological difference is that instead of attempting to evaluate the likelihood (or posterior density) or its derivatives, this method generates an asymptotically exact Monte Carlo sample from it. I present example applications to power spectrum estimation and signal reconstruction from measurements of the CMB. For these applications the method achieves speed-ups of many orders of magnitude compared to likelihood maximization techniques, while offering greater flexibility in modeling and a full characterization of the uncertainty in the estimates.
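The core idea of drawing Monte Carlo samples from the joint posterior of signal and covariance parameters, rather than maximizing a likelihood, can be illustrated with a toy Gibbs sampler (a scalar-power model with data $d = s + \text{noise}$; this is a minimal sketch under assumed noise levels, not the MAGIC code):

```python
import numpy as np

rng = np.random.default_rng(3)
p, noise_var = 500, 0.5
s_true = rng.standard_normal(p) * 2.0    # true signal power = 4
d = s_true + rng.standard_normal(p) * np.sqrt(noise_var)

S = 1.0                                  # initial signal-power guess
powers = []
for _ in range(400):
    # signal | power: Wiener-filter mean plus a fluctuation draw
    w = S / (S + noise_var)
    s = w * d + rng.standard_normal(p) * np.sqrt(w * noise_var)
    # power | signal: inverse-gamma conditional (Jeffreys-type prior)
    S = np.sum(s ** 2) / rng.chisquare(p)
    powers.append(S)

S_est = np.mean(powers[100:])            # posterior-mean power estimate
```

Alternating exact conditional draws produces an asymptotically exact sample from the joint posterior; the signal step generalizes the Wiener filter, with the power (here a single scalar, in the CMB case a full power spectrum) inferred jointly rather than fixed in advance.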
Gaussian process regression has proven very powerful in statistics, machine learning and inverse problems. A crucial aspect of the success of this methodology, in a wide range of applications to complex and real-world problems, is hierarchical modeling and learning of hyperparameters. The purpose of this paper is to study two paradigms of learning hierarchical parameters: one is the probabilistic Bayesian perspective, in particular the empirical Bayes approach that has been widely used in Bayesian statistics; the other is the deterministic and approximation-theoretic view, in particular the kernel flow algorithm that was proposed recently in the machine learning literature. We establish their consistency in the large-data limit and explicitly identify their implicit bias in parameter learning, for a Matérn-like model on the torus. A particular technical challenge we overcome is the learning of the regularity parameter in the Matérn-like field, for which consistency results have been very scarce in the spatial statistics literature. Moreover, we conduct extensive numerical experiments beyond the Matérn-like model, comparing the two algorithms further. These experiments demonstrate learning of other hierarchical parameters, such as amplitude and lengthscale; they also illustrate the setting of model misspecification, in which the kernel flow approach can show superior performance to the more traditional empirical Bayes approach.
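The empirical Bayes paradigm mentioned above amounts to maximizing the Gaussian process marginal likelihood over the hyperparameters; a minimal sketch with a squared-exponential kernel and a single lengthscale fitted by grid search (a generic illustration under assumed data and noise level, not the paper's Matérn-like model on the torus):

```python
import numpy as np

rng = np.random.default_rng(4)
n, noise = 60, 0.1
x = np.sort(rng.uniform(0.0, 1.0, n))
y = np.sin(2 * np.pi * x) + noise * rng.standard_normal(n)

def log_marginal(ell):
    # squared-exponential kernel matrix plus noise on the diagonal
    K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell ** 2)
    K += noise ** 2 * np.eye(n)
    sign, logdet = np.linalg.slogdet(K)
    return -0.5 * (y @ np.linalg.solve(K, y) + logdet + n * np.log(2 * np.pi))

ells = np.geomspace(0.01, 1.0, 60)
ell_hat = ells[np.argmax([log_marginal(e) for e in ells])]
```

The kernel flow alternative studied in the paper replaces this likelihood criterion with a cross-validation-type loss comparing predictions on subsampled data; both select hyperparameters from the data rather than fixing them a priori.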
