
Regularity and convergence analysis in Sobolev and Hölder spaces for generalized Whittle–Matérn fields

 Added by Kristin Kirchner
 Publication date 2019
Language: English





We analyze several Galerkin approximations of a Gaussian random field $\mathcal{Z}\colon\mathcal{D}\times\Omega\to\mathbb{R}$ indexed by a Euclidean domain $\mathcal{D}\subset\mathbb{R}^d$ whose covariance structure is determined by a negative fractional power $L^{-2\beta}$ of a second-order elliptic differential operator $L := -\nabla\cdot(A\nabla) + \kappa^2$. Under minimal assumptions on the domain $\mathcal{D}$, the coefficients $A\colon\mathcal{D}\to\mathbb{R}^{d\times d}$, $\kappa\colon\mathcal{D}\to\mathbb{R}$, and the fractional exponent $\beta>0$, we prove convergence in $L_q(\Omega; H^\sigma(\mathcal{D}))$ and in $L_q(\Omega; C^\delta(\overline{\mathcal{D}}))$ at (essentially) optimal rates for (i) spectral Galerkin methods and (ii) finite element approximations. Specifically, our analysis is solely based on $H^{1+\alpha}(\mathcal{D})$-regularity of the differential operator $L$, where $0<\alpha\leq 1$. For this setting, we furthermore provide rigorous estimates for the error in the covariance function of these approximations in $L_{\infty}(\mathcal{D}\times\mathcal{D})$ and in the mixed Sobolev space $H^{\sigma,\sigma}(\mathcal{D}\times\mathcal{D})$, showing convergence which is more than twice as fast compared to the corresponding $L_q(\Omega; H^\sigma(\mathcal{D}))$-rate. For the well-known example of such Gaussian random fields, the original Whittle–Matérn class, where $L=-\Delta + \kappa^2$ and $\kappa \equiv \operatorname{const.}$, we perform several numerical experiments which validate our theoretical results.
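As a minimal illustration of the construction (not code from the paper), the sketch below draws a spectral Galerkin sample of a Whittle–Matérn-type field on $\mathcal{D}=(0,1)$, assuming $L=-\Delta+\kappa^2$ with homogeneous Dirichlet boundary conditions, for which the eigenpairs of $L$ are known in closed form; the truncation level `N` and grid size `n_grid` are illustrative choices.

```python
import numpy as np

def sample_whittle_matern_1d(beta=1.0, kappa=1.0, N=200, n_grid=512, seed=None):
    """Spectral Galerkin sample of Z = L^{-beta} W on D = (0,1), where
    L = -d^2/dx^2 + kappa^2 with homogeneous Dirichlet boundary conditions,
    so that the covariance operator of Z is L^{-2 beta}. The expansion is
    truncated after the N leading eigenpairs of L."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, n_grid)
    k = np.arange(1, N + 1)
    lam = (np.pi * k) ** 2 + kappa ** 2                 # eigenvalues of L
    e = np.sqrt(2.0) * np.sin(np.pi * np.outer(k, x))   # orthonormal eigenfunctions
    xi = rng.standard_normal(N)                         # i.i.d. N(0,1) weights
    z = (lam ** (-beta) * xi) @ e                       # Z = sum_k lam_k^{-beta} xi_k e_k
    return x, z
```

Larger `beta` damps the high-frequency eigenmodes more strongly and hence yields smoother sample paths, in line with the regularity results above.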



Related research

Yonghui Ling, Juan Liang (2018)
In the present paper, we consider the semilocal convergence of the two-step Newton method for solving nonlinear operator equations in Banach spaces. Under the assumption that the first derivative of the operator satisfies a generalized Lipschitz condition, a new semilocal convergence analysis for the two-step Newton method is presented. Q-cubic convergence is obtained under an additional condition. This analysis also yields three important special cases of the convergence results, based on premises of Kantorovich, Smale, and Nesterov–Nemirovskii type. As an application, our convergence results are used to approximate the minimal positive solution of a nonsymmetric algebraic Riccati equation arising in transport theory.
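A sketch of the two-step Newton iteration discussed above, in the finite-dimensional case (the paper works in Banach spaces; the function names and stopping rule here are illustrative):

```python
import numpy as np

def two_step_newton(F, J, x0, tol=1e-12, max_iter=50):
    """Two-step Newton iteration with the Jacobian frozen within each step:
        y_n     = x_n - J(x_n)^{-1} F(x_n)
        x_{n+1} = y_n - J(x_n)^{-1} F(y_n)
    One Jacobian factorization and two residual solves per step; under
    standard assumptions the iteration is Q-cubically convergent."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    for _ in range(max_iter):
        Jx = np.atleast_2d(J(x))
        y = x - np.linalg.solve(Jx, np.atleast_1d(F(x)))
        x_new = y - np.linalg.solve(Jx, np.atleast_1d(F(y)))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

For example, applied to the scalar equation $x^2 - 2 = 0$ with starting point $x_0 = 1$, the iteration converges to $\sqrt{2}$ in a handful of steps.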
The periodization of a stationary Gaussian random field on a sufficiently large torus comprising the spatial domain of interest is the basis of various efficient computational methods, such as the classical circulant embedding technique using the fast Fourier transform for generating samples on uniform grids. For the family of Matérn covariances with smoothness index $\nu$ and correlation length $\lambda$, we analyse the nonsmooth periodization (corresponding to classical circulant embedding) and an alternative procedure using a smooth truncation of the covariance function. We solve two open problems: the first concerning the $\nu$-dependent asymptotic decay of eigenvalues of the resulting circulant in the nonsmooth case, the second concerning the required size, in terms of $\nu$ and $\lambda$, of the torus when using a smooth periodization. In doing this we arrive at a complete characterisation of the performance of these two approaches. Both our theoretical estimates and the numerical tests provided here show substantial advantages of smooth truncation.
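The classical (nonsmooth) circulant embedding mentioned above can be sketched in one dimension as follows, using the Matérn covariance with $\nu = 1/2$ (the exponential kernel) so that no Bessel-function evaluation is needed; the torus size `m = 2 * n` is an illustrative choice and, as the analysis above makes precise, may need enlarging for other covariance parameters.

```python
import numpy as np

def circulant_embedding_sample(n=256, lam=0.1, length=1.0, seed=None):
    """Classical (nonsmooth) circulant embedding in 1D: periodize the
    covariance on a torus of twice the domain length and diagonalize the
    resulting circulant with the FFT. Uses the Matern covariance with
    nu = 1/2, i.e. c(r) = exp(-r/lam)."""
    m = 2 * n                                    # torus size (illustrative)
    h = length / n                               # grid spacing
    j = np.arange(m)
    dist = np.minimum(j, m - j) * h              # periodic (torus) distance
    c = np.exp(-dist / lam)                      # first row of the circulant
    eig = np.fft.fft(c).real                     # its eigenvalues
    if eig.min() < -1e-12 * eig.max():           # embedding must be nonneg. definite
        raise ValueError("negative eigenvalues: enlarge the torus")
    eig = np.maximum(eig, 0.0)                   # clip roundoff-level negatives
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    z = np.fft.fft(np.sqrt(eig / m) * xi)        # real/imag parts: two samples
    return z.real[:n], z.imag[:n]
```

One FFT of length $m$ yields two independent samples on the grid, which is the source of the method's efficiency.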
Sean Hon, Haizhao Yang (2021)
We establish in this work approximation results of deep neural networks for smooth functions measured in Sobolev norms, motivated by the recent development of numerical solvers for partial differential equations using deep neural networks. The error bounds are explicitly characterized in terms of both the width and depth of the networks simultaneously. Namely, for $f\in C^s([0,1]^d)$, we show that deep ReLU networks of width $\mathcal{O}(N\log N)$ and of depth $\mathcal{O}(L\log L)$ can achieve a non-asymptotic approximation rate of $\mathcal{O}(N^{-2(s-1)/d}L^{-2(s-1)/d})$ with respect to the $\mathcal{W}^{1,p}([0,1]^d)$ norm for $p\in[1,\infty)$. If either the ReLU function or its square is applied as activation function to construct deep neural networks of width $\mathcal{O}(N\log N)$ and of depth $\mathcal{O}(L\log L)$ to approximate $f\in C^s([0,1]^d)$, the non-asymptotic approximation rate is $\mathcal{O}(N^{-2(s-n)/d}L^{-2(s-n)/d})$ with respect to the $\mathcal{W}^{n,p}([0,1]^d)$ norm for $p\in[1,\infty)$.
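A toy instance of the phenomenon, far simpler than the networks analyzed above: a one-hidden-layer ReLU network that reproduces the piecewise-linear interpolant of a smooth $f$ on $[0,1]$, whose uniform error decays algebraically in the number of units. All names and parameters are illustrative.

```python
import numpy as np

def relu_interpolant(f, n_knots, x_eval):
    """One-hidden-layer ReLU network realizing the piecewise-linear
    interpolant of f on [0,1] with n_knots uniform subintervals: each
    interior knot contributes one ReLU unit whose weight is the jump in
    slope there (a scaled second difference of f)."""
    knots = np.linspace(0.0, 1.0, n_knots + 1)
    h = 1.0 / n_knots
    relu = lambda t: np.maximum(t, 0.0)
    # first unit fixes the value at 0 and the slope on the first interval
    out = f(knots[0]) + (f(knots[1]) - f(knots[0])) / h * relu(x_eval - knots[0])
    for i in range(1, n_knots):
        jump = (f(knots[i + 1]) - 2.0 * f(knots[i]) + f(knots[i - 1])) / h
        out = out + jump * relu(x_eval - knots[i])
    return out
```

For $f \in C^2$, the sup-norm error of this network with $N$ units behaves like $\mathcal{O}(N^{-2})$, a one-dimensional shadow of the width-depth rates quoted above.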
In this paper we introduce a generalized Sobolev space by defining a semi-inner product formulated in terms of a vector distributional operator $\mathbf{P}$ consisting of finitely or countably many distributional operators $P_n$, which are defined on the dual space of the Schwartz space. The types of operators we consider include not only differential operators, but also more general distributional operators such as pseudo-differential operators. We deduce that a certain appropriate full-space Green function $G$ with respect to $L:=\mathbf{P}^{\ast T}\mathbf{P}$ now becomes a conditionally positive definite function. In order to support this claim we ensure that the distributional adjoint operator $\mathbf{P}^{\ast}$ of $\mathbf{P}$ is well-defined in the distributional sense. Under sufficient conditions, the native space (reproducing-kernel Hilbert space) associated with the Green function $G$ can be isometrically embedded into or even be isometrically equivalent to a generalized Sobolev space. As an application, we take linear combinations of translates of the Green function with possibly added polynomial terms and construct a multivariate minimum-norm interpolant $s_{f,X}$ to data values sampled from an unknown generalized Sobolev function $f$ at data sites located in some set $X \subset \mathbb{R}^d$. We provide several examples, such as Matérn kernels or Gaussian kernels, that illustrate how many reproducing-kernel Hilbert spaces of well-known reproducing kernels are isometrically equivalent to a generalized Sobolev space. These examples further illustrate how we can rescale the Sobolev spaces by the vector distributional operator $\mathbf{P}$. Introducing the notion of scale as part of the definition of a generalized Sobolev space may help us to choose the best kernel function for kernel-based approximation methods.
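A hedged sketch of the kernel-interpolation application described above, in one dimension: the minimum-norm interpolant in the native space of a Matérn(3/2) kernel. The length scale and the tiny diagonal regularization are illustrative choices for numerical stability, not part of the theory.

```python
import numpy as np

def matern32(x, y, ell=0.3):
    """Matern kernel with smoothness nu = 3/2 (illustrative length scale)."""
    r = np.abs(x - y) / ell
    return (1.0 + np.sqrt(3.0) * r) * np.exp(-np.sqrt(3.0) * r)

def kernel_interpolant(kernel, X, fX):
    """Minimum-norm interpolant s(x) = sum_j c_j K(x, x_j) in the native
    (reproducing-kernel Hilbert) space of K; the coefficients solve the
    symmetric positive definite system K(X, X) c = f(X)."""
    K = kernel(X[:, None], X[None, :])
    c = np.linalg.solve(K + 1e-12 * np.eye(len(X)), fX)
    return lambda x: kernel(np.asarray(x, dtype=float)[..., None], X[None, :]) @ c
```

By construction the interpolant reproduces the data at the sites in $X$, and its native-space norm is minimal among all interpolants in that space.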
We give an elementary proof of a compact embedding theorem in abstract Sobolev spaces. The result is first presented in a general context and later specialized to the case of degenerate Sobolev spaces defined with respect to nonnegative quadratic forms. Although our primary interest concerns degenerate quadratic forms, our result also applies to nondegenerate cases, and we consider several such applications, including the classical Rellich–Kondrachov compact embedding theorem and results for the class of s-John domains, the latter for weights equal to powers of the distance to the boundary. We also derive a compactness result for Lebesgue spaces on quasimetric spaces unrelated to Euclidean space and possibly without any notion of gradient.
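For orientation, the classical Rellich–Kondrachov theorem that this result recovers in the nondegenerate case can be stated as:

```latex
% Rellich--Kondrachov compact embedding: for a bounded Lipschitz
% domain $\Omega \subset \mathbb{R}^d$ and $1 \le p < d$,
\[
  W^{1,p}(\Omega) \hookrightarrow\hookrightarrow L^{q}(\Omega)
  \quad \text{for all } 1 \le q < p^{*},
  \qquad p^{*} := \frac{dp}{d-p},
\]
% while for $p \ge d$ the embedding into $L^{q}(\Omega)$ is compact
% for every $q < \infty$.
```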
