
Rate of Convergence and Tractability of the Radial Function Approximation Problem

Posted by: Fred J. Hickernell
Publication date: 2010
Research field:
Language: English





This article studies the problem of approximating functions belonging to a Hilbert space $H_d$ with an isotropic or anisotropic Gaussian reproducing kernel, $$ K_d(\boldsymbol{x},\boldsymbol{t}) = \exp\left(-\sum_{\ell=1}^d \gamma_\ell^2 (x_\ell - t_\ell)^2\right) \quad \mbox{for all } \boldsymbol{x},\boldsymbol{t} \in \mathbb{R}^d. $$ The isotropic case corresponds to using the same shape parameter for all coordinates, namely $\gamma_\ell = \gamma > 0$ for all $\ell$, whereas the anisotropic case corresponds to varying shape parameters $\gamma_\ell$. We are especially interested in moderate to large $d$.
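As a concrete illustration of the kernel above (not code from the paper), the following minimal sketch evaluates the isotropic and anisotropic Gaussian kernel matrices for a small point set; the shape parameters and the NumPy implementation are illustrative choices.

```python
import numpy as np

def gaussian_kernel(X, T, gamma):
    """Anisotropic Gaussian kernel K_d(x, t) = exp(-sum_l gamma_l^2 (x_l - t_l)^2).

    X: (n, d) array of points, T: (m, d) array of points,
    gamma: length-d array of shape parameters (a constant array gives
    the isotropic case gamma_l = gamma).
    """
    # Scale each coordinate by its shape parameter, then form squared distances.
    Xs = X * gamma                       # broadcasts gamma over rows
    Ts = T * gamma
    sq_dists = ((Xs[:, None, :] - Ts[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists)

# Example with d = 5: isotropic (gamma_l = 0.5) vs. anisotropic (decaying gamma_l).
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 5))
K_iso = gaussian_kernel(X, X, 0.5 * np.ones(5))
K_aniso = gaussian_kernel(X, X, 1.0 / np.arange(1, 6))
```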




Read also

We consider the geometry relaxation of an isolated point defect embedded in a homogeneous crystalline solid, within an atomistic description. We prove a sharp convergence rate for a periodic supercell approximation with respect to uniform convergence of the discrete strains.
We consider approximation problems for a special space of $d$-variate functions. We show that the problems have a small number of active variables, as has been postulated in the past using concentration of measure arguments. We also show that, depending on the norm used for measuring the error, the problems are strongly polynomially or quasi-polynomially tractable even in the model of computation where function evaluations have cost exponential in the number of active variables.
Kosuke Suzuki (2015)
We investigate multivariate integration for a space of infinitely differentiable functions $\mathcal{F}_{s,\boldsymbol{u}} := \{ f \in C^\infty[0,1]^s \mid \| f \|_{\mathcal{F}_{s,\boldsymbol{u}}} < \infty \}$, where $\| f \|_{\mathcal{F}_{s,\boldsymbol{u}}} := \sup_{\boldsymbol{\alpha} = (\alpha_1,\dots,\alpha_s) \in \mathbb{N}_0^s} \| f^{(\boldsymbol{\alpha})} \|_{L^1} / \prod_{j=1}^s u_j^{\alpha_j}$, $f^{(\boldsymbol{\alpha})} := \frac{\partial^{|\boldsymbol{\alpha}|}}{\partial x_1^{\alpha_1} \cdots \partial x_s^{\alpha_s}} f$, and $\boldsymbol{u} = \{u_j\}_{j \geq 1}$ is a sequence of positive decreasing weights. Let $e(n,s)$ be the minimal worst-case error of all algorithms that use $n$ function values in the $s$-variate case. We prove that for any $\boldsymbol{u}$ and $s$ considered, $e(n,s) \leq C(s) \exp(-c(s)(\log n)^2)$ holds for all $n$, where $C(s)$ and $c(s)$ are constants which may depend on $s$. Further, we show that if the weights $\boldsymbol{u}$ decay sufficiently fast, then there exist some $1 < p < 2$ and absolute constants $C$ and $c$ such that $e(n,s) \leq C \exp(-c(\log n)^p)$ holds for all $s$ and $n$. These bounds are attained by quasi-Monte Carlo integration using digital nets. These convergence and tractability results come from those for the Walsh space into which $\mathcal{F}_{s,\boldsymbol{u}}$ is embedded.
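For a rough feel of the kind of quadrature behind these bounds (not a construction from the paper), the sketch below integrates a hypothetical smooth product-type integrand with a scrambled Sobol' sequence, one standard family of digital nets, via scipy.stats.qmc; the weights $u_j = 1/j^2$ and the test integrand are assumptions made here purely for illustration.

```python
import numpy as np
from scipy.stats import qmc

def qmc_integrate(f, s, m):
    """Approximate the integral of f over [0,1]^s with n = 2^m Sobol' (digital net) points."""
    sampler = qmc.Sobol(d=s, scramble=True, seed=42)
    points = sampler.random_base2(m=m)          # n = 2^m points in [0,1]^s
    return np.mean(f(points))

# Hypothetical smooth integrand with product weights u_j = 1/j^2, loosely mimicking
# rapidly decaying dependence on later coordinates; its exact integral is 1.
s = 8
u = 1.0 / np.arange(1, s + 1) ** 2
f = lambda x: np.prod(1.0 + u * (x - 0.5), axis=1)

for m in (4, 8, 12):
    print(2 ** m, abs(qmc_integrate(f, s, m) - 1.0))
```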
Numerical causal derivative estimators from noisy data are essential for real-time applications, especially control applications and fluid simulation, so as to address the new paradigms in solid modeling and video compression. Using an analytical point of view due to Lanczos \cite{C. Lanczos} for this causal case, we revisit the $n^{\text{th}}$-order derivative estimators originally introduced within an algebraic framework by Mboup, Fliess and Join in \cite{num,num0}. Given a noise level $\delta$ and a suitable integration window length, we show that the derivative estimator error can be $\mathcal{O}(\delta^{\frac{q+1}{n+1+q}})$, where $q$ is the truncation order of the Jacobi polynomial series expansion used. The bound thus obtained helps us choose the parameters of our estimators. We show the efficiency of our method on some examples.
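As a minimal illustration of such causal, integral-based derivative estimators (and not the general $n^{\text{th}}$-order Jacobi construction of the paper), the sketch below implements only the lowest-order case: the slope of a least-squares linear fit over a trailing window of length $T$, which can be written as the weighted integral $\frac{6}{T^3}\int_0^T (T - 2\tau)\, y(t-\tau)\, d\tau$ and discretized with the trapezoidal rule. The window length and noise level are illustrative choices.

```python
import numpy as np

def causal_first_derivative(y, dt, T):
    """Estimate y'(t) from past samples only, via the weighted integral
    (6/T^3) * int_0^T (T - 2*tau) y(t - tau) dtau
    (the least-squares linear-fit slope over the trailing window of length T)."""
    m = int(round(T / dt))                     # number of past steps in the window
    tau = np.arange(m + 1) * dt                # tau = 0, dt, ..., T
    w = (6.0 / T**3) * (T - 2.0 * tau)         # integral weights
    d = np.full_like(y, np.nan, dtype=float)
    for k in range(m, len(y)):
        window = y[k - m:k + 1][::-1]          # y(t), y(t - dt), ..., y(t - T)
        g = w * window
        # Composite trapezoidal rule for the weighted integral.
        d[k] = dt * (g.sum() - 0.5 * (g[0] + g[-1]))
    return d

# Illustrative test: noisy sine with noise level delta = 1e-2.
dt, T = 1e-3, 0.1
t = np.arange(0, 2, dt)
y = np.sin(2 * np.pi * t) + 1e-2 * np.random.default_rng(1).standard_normal(t.size)
dy = causal_first_derivative(y, dt, T)
```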
The Gaver-Stehfest algorithm is widely used for the numerical inversion of the Laplace transform. In this paper we provide the first rigorous study of the rate of convergence of the Gaver-Stehfest algorithm. We prove that Gaver-Stehfest approximations converge exponentially fast if the target function is analytic in a neighbourhood of a point, and that they converge at a rate $o(n^{-k})$ if the target function is $(2k+3)$-times differentiable at a point.
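The algorithm analysed here is simple to state: $f(t) \approx \frac{\ln 2}{t}\sum_{k=1}^{N} V_k\, F\!\left(\frac{k \ln 2}{t}\right)$ with the classical Stehfest weights $V_k$. The sketch below is a standard textbook-style implementation of that formula, tested on the transform $F(s) = 1/(s+1)$ whose inverse is $e^{-t}$; it illustrates the algorithm being analysed, not any construction from the paper.

```python
import math

def stehfest_coefficients(N):
    """Classical Stehfest weights V_k, k = 1..N (N even, n = N/2)."""
    n = N // 2
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, n) + 1):
            s += (j ** n * math.factorial(2 * j)
                  / (math.factorial(n - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V.append((-1) ** (n + k) * s)
    return V

def gaver_stehfest(F, t, N=16):
    """Approximate f(t) = L^{-1}[F](t) by (ln 2 / t) * sum_k V_k F(k ln 2 / t)."""
    ln2_t = math.log(2.0) / t
    V = stehfest_coefficients(N)
    return ln2_t * sum(V[k - 1] * F(k * ln2_t) for k in range(1, N + 1))

# Example: F(s) = 1/(s + 1), whose inverse transform is exp(-t).
F = lambda s: 1.0 / (s + 1.0)
print(gaver_stehfest(F, 1.0), math.exp(-1.0))
```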