
Equivalence of measures and asymptotically optimal linear prediction for Gaussian random fields with fractional-order covariance operators

Posted by: Kristin Kirchner
Publication date: 2021
Research language: English





We consider Gaussian measures $\mu, \tilde{\mu}$ on a separable Hilbert space, with fractional-order covariance operators $A^{-2\beta}$ and $\tilde{A}^{-2\tilde{\beta}}$, respectively, and derive necessary and sufficient conditions on $A, \tilde{A}$ and $\beta, \tilde{\beta} > 0$ for I. equivalence of the measures $\mu$ and $\tilde{\mu}$, and II. uniform asymptotic optimality of linear predictions for $\mu$ based on the misspecified measure $\tilde{\mu}$. These results hold, e.g., for Gaussian processes on compact metric spaces. As an important special case, we consider the class of generalized Whittle-Matérn Gaussian random fields, where $A$ and $\tilde{A}$ are elliptic second-order differential operators, formulated on a bounded Euclidean domain $\mathcal{D}\subset\mathbb{R}^d$ and augmented with homogeneous Dirichlet boundary conditions. Our outcomes explain why the predictive performances of stationary and non-stationary models in spatial statistics often are comparable, and provide a crucial first step in deriving consistency results for parameter estimation of generalized Whittle-Matérn fields.
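As a rough numerical illustration of this setting (not taken from the paper), the following Python sketch samples a Gaussian field with covariance operator $A^{-2\beta}$ for $A = -\mathrm{d}^2/\mathrm{d}x^2 + \kappa^2$ on $(0,1)$ with homogeneous Dirichlet boundary conditions, using the known eigenpairs of the Dirichlet Laplacian in a truncated Karhunen-Loève expansion; the grid size, $\kappa$, and $\beta$ are hypothetical choices.

```python
import numpy as np

# Minimal sketch: sample a Gaussian field with covariance operator A^{-2*beta},
# where A = -d^2/dx^2 + kappa^2 on (0, 1) with homogeneous Dirichlet conditions,
# via a truncated Karhunen-Loeve expansion. All parameter values are illustrative.

n = 200                      # number of interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)

kappa, beta = 5.0, 0.75      # hypothetical parameters

# Dirichlet Laplacian eigenpairs on (0, 1): (pi*j)^2 and sqrt(2) * sin(pi*j*x)
j = np.arange(1, n + 1)
lam = (np.pi * j) ** 2 + kappa ** 2                  # eigenvalues of A
e = np.sqrt(2.0) * np.sin(np.pi * np.outer(x, j))    # eigenfunctions on the grid

# Karhunen-Loeve sample: u = sum_j lambda_j^{-beta} * xi_j * e_j with xi_j ~ N(0, 1),
# so that Cov(u) = A^{-2*beta}.
rng = np.random.default_rng(0)
xi = rng.standard_normal(n)
u = e @ (lam ** (-beta) * xi)

print(u[:5])
```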


Read also

Optimal linear prediction (also known as kriging) of a random field $\{Z(x)\}_{x\in\mathcal{X}}$ indexed by a compact metric space $(\mathcal{X},d_{\mathcal{X}})$ can be obtained if the mean value function $m\colon\mathcal{X}\to\mathbb{R}$ and the covariance function $\varrho\colon\mathcal{X}\times\mathcal{X}\to\mathbb{R}$ of $Z$ are known. We consider the problem of predicting the value of $Z(x^*)$ at some location $x^*\in\mathcal{X}$ based on observations at locations $\{x_j\}_{j=1}^n$ which accumulate at $x^*$ as $n\to\infty$ (or, more generally, predicting $\varphi(Z)$ based on $\{\varphi_j(Z)\}_{j=1}^n$ for linear functionals $\varphi, \varphi_1, \ldots, \varphi_n$). Our main result characterizes the asymptotic performance of linear predictors (as $n$ increases) based on an incorrect second-order structure $(\tilde{m},\tilde{\varrho})$, without any restrictive assumptions on $\varrho, \tilde{\varrho}$ such as stationarity. We, for the first time, provide necessary and sufficient conditions on $(\tilde{m},\tilde{\varrho})$ for asymptotic optimality of the corresponding linear predictor holding uniformly with respect to $\varphi$. These general results are illustrated by weakly stationary random fields on $\mathcal{X}\subset\mathbb{R}^d$ with Matérn or periodic covariance functions, and on the sphere $\mathcal{X}=\mathbb{S}^2$ for the case of two isotropic covariance functions.
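A minimal sketch of the objects involved (illustrative only, not the paper's construction): simple kriging of $Z(x^*)$ from $n$ observations, once with a "true" Matérn covariance and once with a misspecified one, so the two linear predictors can be compared on the same sample. The Matérn smoothness and all parameter values are hypothetical.

```python
import numpy as np
from scipy.spatial.distance import cdist

# Sketch: simple kriging of Z(x*) from observations Z(x_1),...,Z(x_n), once with the
# true Matern-3/2 covariance and once with a misspecified one. Parameters are illustrative.

def matern32(D, sigma2, ell):
    """Matern covariance with smoothness nu = 3/2, evaluated at distances D."""
    r = np.sqrt(3.0) * D / ell
    return sigma2 * (1.0 + r) * np.exp(-r)

rng = np.random.default_rng(1)
x_obs = rng.uniform(0.0, 1.0, size=(30, 1))   # observation locations
x_star = np.array([[0.5]])                    # prediction location

# true and misspecified second-order structures (zero mean)
C_true = lambda A, B: matern32(cdist(A, B), sigma2=1.0, ell=0.2)
C_mis  = lambda A, B: matern32(cdist(A, B), sigma2=2.0, ell=0.3)

z = rng.multivariate_normal(np.zeros(len(x_obs)), C_true(x_obs, x_obs))  # one sample of Z

def krig_weights(C):
    """Kriging weights C(X, X)^{-1} c(X, x*) for a given covariance function."""
    return np.linalg.solve(C(x_obs, x_obs), C(x_obs, x_star)).ravel()

w_true, w_mis = krig_weights(C_true), krig_weights(C_mis)
print("prediction (true cov):        ", w_true @ z)
print("prediction (misspecified cov):", w_mis @ z)
```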
The assumption of separability of the covariance operator for a random image or hypersurface can be of substantial use in applications, especially in situations where accurate estimation of the full covariance structure is infeasible, either for computational reasons or due to a small sample size. However, inferential tools to verify this assumption are somewhat lacking in high-dimensional or functional data analysis settings, where this assumption is most relevant. We propose here to test separability by focusing on $K$-dimensional projections of the difference between the covariance operator and a nonparametric separable approximation. The subspace we project onto is generated by the eigenfunctions of the covariance operator estimated under the separability hypothesis, negating the need to ever estimate the full non-separable covariance. We show that the rescaled difference of the sample covariance operator with its separable approximation is asymptotically Gaussian. As a by-product of this result, we derive asymptotically pivotal tests under Gaussian assumptions, and propose bootstrap methods for approximating the distribution of the test statistics. We probe the finite sample performance through simulation studies, and present an application to log-spectrogram images from a phonetic linguistics dataset.
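The following is a minimal sketch, on toy data, of a separable (Kronecker-product) approximation to the empirical covariance of random images and of projecting the discrepancy onto a $K$-dimensional eigenspace of the separable estimate; the particular marginal estimators, rescaling, and normalization are illustrative assumptions and not the test statistic proposed in the paper.

```python
import numpy as np

# Sketch: separable (Kronecker) approximation of the covariance of n random p x q images
# and the size of its discrepancy from the full empirical covariance on a K-dimensional
# eigenspace of the separable estimate. Estimator and normalization are illustrative only.

rng = np.random.default_rng(2)
n, p, q, K = 200, 8, 6, 4
X = rng.standard_normal((n, p, q))            # toy data; replace with real images
X = X - X.mean(axis=0)                        # center

# full empirical covariance of the vectorized (row-major) images, shape (p*q, p*q)
V = X.reshape(n, -1)
C_full = V.T @ V / n

# marginal ("partial trace") covariance estimates for rows and columns
C_rows = np.einsum('nij,nkj->ik', X, X) / (n * q)   # p x p
C_cols = np.einsum('nij,nik->jk', X, X) / (n * p)   # q x q
C_sep = np.kron(C_rows, C_cols) * (np.trace(C_full) / (np.trace(C_rows) * np.trace(C_cols)))

# project the difference onto the leading K eigenvectors of the separable estimate
w, U = np.linalg.eigh(C_sep)                  # eigenvalues in ascending order
U_K = U[:, -K:]
D = U_K.T @ (C_full - C_sep) @ U_K
print("projected discrepancy (Frobenius norm):", np.linalg.norm(D))
```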
We show that the problem of finding the measure supported on a compact subset $K$ of the complex plane, such that the variance of the least squares predictor by polynomials of degree at most $n$ at a point exterior to $K$ is minimal, is equivalent to the problem of finding the polynomial of degree at most $n$, bounded by 1 on $K$, with extremal growth at this exterior point. We use this to find the polynomials of extremal growth for the interval $[-1,1]$ at a purely imaginary point. The related problem on the extremal growth of real polynomials was studied by Erdős in 1947.
Patrick Cattiaux, 2021
We study functional inequalities (Poincaré, Cheeger, log-Sobolev) for probability measures obtained as perturbations. Several explicit results for general measures as well as log-concave distributions are given. The initial goal of this work was to obtain explicit bounds on the constants in view of statistical applications. These results are then applied to the Langevin Monte Carlo method used in statistics in order to compute Bayesian estimators.
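As a generic illustration of the sampling method mentioned (not the specific estimator or constants analysed in the paper), a minimal unadjusted Langevin algorithm for a log-concave target looks as follows; the step size, chain length, and Gaussian target are assumptions chosen only for this sketch.

```python
import numpy as np

# Sketch: unadjusted Langevin algorithm (ULA) for a log-concave target pi(x) ~ exp(-U(x)),
# here a standard 2D Gaussian so that grad U(x) = x. Step size and length are illustrative.

def ula(grad_U, x0, step, n_iter, rng):
    """Iterate x_{k+1} = x_k - step * grad_U(x_k) + sqrt(2 * step) * N(0, I)."""
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_iter, x.size))
    for k in range(n_iter):
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.size)
        samples[k] = x
    return samples

rng = np.random.default_rng(3)
chain = ula(grad_U=lambda x: x, x0=np.zeros(2), step=0.05, n_iter=20_000, rng=rng)
print("empirical mean after burn-in:", chain[5_000:].mean(axis=0))
```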
We investigate the sub-Gaussian property for almost surely bounded random variables. If sub-Gaussianity per se is de facto ensured by the bounded support of said random variables, then exciting research avenues remain open. Among these questions is how to characterize the optimal sub-Gaussian proxy variance. Another question is how to characterize strict sub-Gaussianity, defined by a proxy variance equal to the (standard) variance. We address these questions by proposing conditions based on the study of the variations of functions. A particular focus is given to the relationship between strict sub-Gaussianity and symmetry of the distribution. In particular, we demonstrate that symmetry is neither sufficient nor necessary for strict sub-Gaussianity. In contrast, simple necessary conditions on the one hand, and simple sufficient conditions on the other hand, for strict sub-Gaussianity are provided. These results are illustrated via various applications to a number of bounded random variables, including Bernoulli, beta, binomial, uniform, Kumaraswamy, and triangular distributions.
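By definition, the optimal proxy variance of a bounded variable $X$ is the supremum over $\lambda \neq 0$ of $2\log\mathbb{E}[e^{\lambda(X-\mathbb{E}X)}]/\lambda^2$; the sketch below approximates this supremum on a grid for a Bernoulli example so it can be compared with the ordinary variance. The grid range and the value of $p$ are illustrative choices, not taken from the paper.

```python
import numpy as np

# Sketch: numerically approximate the optimal sub-Gaussian proxy variance of a bounded
# discrete random variable X as sup over lambda of 2 * log E[exp(lambda*(X - EX))] / lambda^2.
# Here X ~ Bernoulli(p); the grid of lambda values avoids 0 and is illustrative.

def optimal_proxy_variance(values, probs, lambdas):
    """Grid approximation of the smallest s with log E[e^{l(X-EX)}] <= l^2 s / 2 for all l."""
    values, probs = np.asarray(values, float), np.asarray(probs, float)
    centered = values - values @ probs
    # cumulant generating function psi(l) = log E[exp(l * (X - EX))]
    psi = np.log(np.exp(np.outer(lambdas, centered)) @ probs)
    return np.max(2.0 * psi / lambdas ** 2)

p = 0.1
lam = np.concatenate([np.linspace(-50.0, -1e-3, 50_000), np.linspace(1e-3, 50.0, 50_000)])
print("variance:            ", p * (1 - p))
print("optimal proxy (grid):", optimal_proxy_variance([0.0, 1.0], [1 - p, p], lam))
```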
