
On Tractability of Approximation for a Special Space of Functions

Published by: Markus Hegland
Publication date: 2012
Research language: English





We consider approximation problems for a special space of d-variate functions. We show that the problems have a small number of active variables, as has been postulated in the past using concentration of measure arguments. We also show that, depending on the norm used to measure the error, the problems are strongly polynomially or quasi-polynomially tractable, even in the model of computation where function evaluations have cost exponential in the number of active variables.
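For context, the two tractability notions mentioned above have standard meanings in information-based complexity. As a reminder (these definitions are not part of the abstract, and the constants $C$, $p$, $t$ are problem-dependent), with $n(\varepsilon, d)$ denoting the minimal number of function evaluations needed to reach error $\varepsilon$ in dimension $d$:

```latex
% Strong polynomial tractability: a polynomial bound in 1/epsilon, uniform in d
n(\varepsilon, d) \le C \, \varepsilon^{-p}
    \quad \text{for all } d \in \mathbb{N},\ \varepsilon \in (0,1).

% Quasi-polynomial tractability: the exponent may grow like ln(d) * ln(1/epsilon)
n(\varepsilon, d) \le C \exp\bigl( t \, (1 + \ln \varepsilon^{-1})(1 + \ln d) \bigr)
    \quad \text{for all } d \in \mathbb{N},\ \varepsilon \in (0,1).
```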


Read also

This article studies the problem of approximating functions belonging to a Hilbert space $H_d$ with an isotropic or anisotropic Gaussian reproducing kernel, $$ K_d(\boldsymbol{x},\boldsymbol{t}) = \exp\left(-\sum_{\ell=1}^d \gamma_\ell^2 (x_\ell - t_\ell)^2\right) \quad \text{for all } \boldsymbol{x}, \boldsymbol{t} \in \mathbb{R}^d. $$ The isotropic case corresponds to using the same shape parameters for all coordinates, namely $\gamma_\ell = \gamma > 0$ for all $\ell$, whereas the anisotropic case corresponds to varying shape parameters $\gamma_\ell$. We are especially interested in moderate to large $d$.
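As a quick numerical illustration of the kernel above (a minimal sketch, not code from the paper; the NumPy interface and the example shape parameters are my own choices), the isotropic and anisotropic cases differ only in whether the vector of shape parameters is constant:

```python
import numpy as np

def gaussian_kernel(x, t, gamma):
    """K_d(x, t) = exp(-sum_l gamma_l^2 * (x_l - t_l)^2).

    x, t  : arrays of shape (d,), the two evaluation points
    gamma : scalar (isotropic case, gamma_l = gamma for all l)
            or array of shape (d,) (anisotropic case)
    """
    x, t, gamma = np.asarray(x), np.asarray(t), np.asarray(gamma)
    return np.exp(-np.sum(gamma**2 * (x - t) ** 2))

d = 10
x, t = np.random.rand(d), np.random.rand(d)
print(gaussian_kernel(x, t, gamma=0.5))                       # isotropic
print(gaussian_kernel(x, t, gamma=np.linspace(1.0, 0.1, d)))  # anisotropic
```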
Kosuke Suzuki (2015)
We investigate multivariate integration for a space of infinitely differentiable functions $\mathcal{F}_{s, \boldsymbol{u}} := \{f \in C^\infty[0,1]^s \mid \|f\|_{\mathcal{F}_{s, \boldsymbol{u}}} < \infty\}$, where $\|f\|_{\mathcal{F}_{s, \boldsymbol{u}}} := \sup_{\boldsymbol{\alpha} = (\alpha_1, \dots, \alpha_s) \in \mathbb{N}_0^s} \|f^{(\boldsymbol{\alpha})}\|_{L^1} / \prod_{j=1}^s u_j^{\alpha_j}$, $f^{(\boldsymbol{\alpha})} := \frac{\partial^{|\boldsymbol{\alpha}|}}{\partial x_1^{\alpha_1} \cdots \partial x_s^{\alpha_s}} f$, and $\boldsymbol{u} = \{u_j\}_{j \geq 1}$ is a sequence of positive decreasing weights. Let $e(n,s)$ be the minimal worst-case error of all algorithms that use $n$ function values in the $s$-variate case. We prove that for any $\boldsymbol{u}$ and $s$ considered, $e(n,s) \leq C(s) \exp(-c(s)(\log n)^2)$ holds for all $n$, where $C(s)$ and $c(s)$ are constants which may depend on $s$. Further, we show that if the weights $\boldsymbol{u}$ decay sufficiently fast, then there exist some $1 < p < 2$ and absolute constants $C$ and $c$ such that $e(n,s) \leq C \exp(-c(\log n)^p)$ holds for all $s$ and $n$. These bounds are attained by quasi-Monte Carlo integration using digital nets. These convergence and tractability results follow from those for the Walsh space into which $\mathcal{F}_{s, \boldsymbol{u}}$ is embedded.
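To make the quasi-Monte Carlo statement above concrete (a sketch only, using SciPy's scrambled Sobol' generator as a readily available digital net and a toy integrand of my own choosing, not the function spaces or rates analysed in the paper):

```python
import numpy as np
from scipy.stats import qmc

def qmc_integrate(f, s, m):
    """Estimate the integral of f over [0,1]^s using 2**m points of a scrambled Sobol' net."""
    sampler = qmc.Sobol(d=s, scramble=True)
    points = sampler.random_base2(m=m)   # shape (2**m, s)
    return np.mean(f(points))

# Toy smooth integrand f(x) = x_1 * ... * x_s, whose exact integral is 2**(-s).
s = 8
estimate = qmc_integrate(lambda x: np.prod(x, axis=1), s=s, m=12)
print(estimate, 2.0**(-s))
```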
Most commonly used \emph{adaptive} algorithms for univariate real-valued function approximation and global minimization lack theoretical guarantees. Our new locally adaptive algorithms are guaranteed to provide answers that satisfy a user-specified absolute error tolerance for a cone, $\mathcal{C}$, of non-spiky input functions in the Sobolev space $W^{2,\infty}[a,b]$. Our algorithms automatically determine where to sample the function, sampling more densely where the second derivative is larger. The computational cost of our algorithm for approximating a univariate function $f$ on a bounded interval with $L^{\infty}$-error no greater than $\varepsilon$ is $\mathcal{O}\bigl(\sqrt{\|f\|_{\frac12}/\varepsilon}\bigr)$ as $\varepsilon \to 0$. This is the same order as that of the best function approximation algorithm for functions in $\mathcal{C}$. The computational cost of our global minimization algorithm is of the same order, and the cost can be substantially less if $f$ significantly exceeds its minimum over much of the domain. Our Guaranteed Automatic Integration Library (GAIL) contains these new algorithms. We provide numerical experiments to illustrate their superior performance.
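The idea of sampling more densely where the second derivative is large can be illustrated with a simple greedy bisection loop (my own sketch, not the algorithm, cone condition, or error criterion of the paper or of GAIL; the stopping rule here is purely heuristic):

```python
import numpy as np

def adaptive_sample(f, a, b, tol=1e-3, max_points=10_000):
    """Greedily choose sample points for piecewise-linear approximation of f on [a, b].

    Each step bisects the subinterval whose midpoint deviates most from the chord
    through its endpoints; that deviation is roughly h**2 * |f''| / 8, so points
    accumulate where the second derivative is large.
    """
    xs = [a, (a + b) / 2.0, b]
    while len(xs) < max_points:
        xs.sort()
        x = np.array(xs)
        y = f(x)
        mids = (x[:-1] + x[1:]) / 2.0
        dev = np.abs(f(mids) - (y[:-1] + y[1:]) / 2.0)  # midpoint-vs-chord deviation
        worst = int(np.argmax(dev))
        if dev[worst] <= tol:
            break
        xs.append(mids[worst])
    x = np.array(sorted(xs))
    return x, f(x)

# Example: a sharp bump at x = 0.5; the returned points cluster around it.
x, y = adaptive_sample(lambda t: np.exp(-200.0 * (t - 0.5) ** 2), 0.0, 1.0)
print(len(x))
```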
We propose an optimal approximation formula for analytic functions that are defined on a complex region containing the real interval $(-1,1)$ and possibly have algebraic singularities at the endpoints of the interval. As a space of such functions, we consider a Hardy space with the weight given by $w_{\mu}(z) = (1-z^{2})^{\mu/2}$ for $\mu > 0$, and formulate the optimality of an approximation formula for the functions in the space. Then, we propose an optimal approximation formula for the space for any $\mu > 0$, as opposed to existing results with the restriction $0 < \mu < \mu_{\ast}$ for a certain constant $\mu_{\ast}$. We also provide the results of numerical experiments to show the performance of the proposed formula.
Jiequn Han, Yingzhou Li, Lin Lin (2019)
We consider universal approximations of symmetric and anti-symmetric functions, which are important for applications in quantum physics, as well as other scientific and engineering computations. We give constructive approximations with explicit bounds on the number of parameters with respect to the dimension and the target accuracy $\epsilon$. While the approximation still suffers from the curse of dimensionality, to the best of our knowledge, these are the first results in the literature with explicit error bounds. Moreover, we also discuss neural network architectures that can be suitable for approximating symmetric and anti-symmetric functions.
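For readers new to the terminology: a symmetric (anti-symmetric) function keeps (flips) its value when its input blocks are permuted, and the brute-force way to obtain such functions from an arbitrary one is to (anti-)symmetrize over all permutations, as in the sketch below (my own illustration of the definitions, with cost factorial in the number of blocks; it is not the efficient construction or network architecture proposed in the paper):

```python
import itertools
import numpy as np

def perm_sign(p):
    """Sign (+1/-1) of a permutation given as a tuple of indices."""
    p, sign = list(p), 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]   # each swap fixes one element in place
            sign = -sign
    return sign

def symmetrize(f, x):
    """Average of f over all row permutations of x: symmetric in the rows of x."""
    perms = itertools.permutations(range(len(x)))
    return np.mean([f(x[list(p)]) for p in perms])

def antisymmetrize(f, x):
    """Sign-weighted average over row permutations: anti-symmetric in the rows of x."""
    perms = itertools.permutations(range(len(x)))
    return np.mean([perm_sign(p) * f(x[list(p)]) for p in perms])

# x holds 4 "particles" with 3 features each; f is an arbitrary (non-symmetric) base function.
x = np.random.rand(4, 3)
f = lambda z: float(np.tanh(z @ np.array([1.0, 2.0, 3.0])).sum() * z[0, 0])
print(symmetrize(f, x), antisymmetrize(f, x))
```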