
Local Adaption for Approximation and Minimization of Univariate Functions

Added by Yuhan Ding
Publication date: 2016
Language: English





Most commonly used \emph{adaptive} algorithms for univariate real-valued function approximation and global minimization lack theoretical guarantees. Our new locally adaptive algorithms are guaranteed to provide answers that satisfy a user-specified absolute error tolerance for a cone, $\mathcal{C}$, of non-spiky input functions in the Sobolev space $W^{2,\infty}[a,b]$. Our algorithms automatically determine where to sample the function, sampling more densely where the second derivative is larger. The computational cost of our algorithm for approximating a univariate function $f$ on a bounded interval with $L^{\infty}$-error no greater than $\varepsilon$ is $\mathcal{O}\Bigl(\sqrt{\left|f\right|_{\frac12}/\varepsilon}\Bigr)$ as $\varepsilon \to 0$. This is the same order as that of the best function approximation algorithm for functions in $\mathcal{C}$. The computational cost of our global minimization algorithm is of the same order, and the cost can be substantially less if $f$ significantly exceeds its minimum over much of the domain. Our Guaranteed Automatic Integration Library (GAIL) contains these new algorithms. We provide numerical experiments to illustrate their superior performance.
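The local-adaption idea in the abstract, sampling more densely where the second derivative is large, can be illustrated with a simplified sketch. The code below is not the paper's guaranteed algorithm or GAIL's implementation; it is a hypothetical illustration that refines a piecewise-linear interpolant greedily, using the midpoint deviation from the chord (which scales like $h^2|f''|$ for smooth $f$) as an error surrogate.

```python
import heapq
import math

def adaptive_linear_approx(f, a, b, tol, max_iter=10_000):
    """Approximate f on [a, b] by piecewise-linear interpolation,
    bisecting whichever subinterval has the largest estimated error.
    Illustration only: NOT the guaranteed algorithm from the paper."""
    def err_est(l, r):
        m = 0.5 * (l + r)
        # deviation of f at the midpoint from the linear interpolant;
        # proportional to (r - l)^2 * |f''| for smooth f
        return abs(f(m) - 0.5 * (f(l) + f(r)))
    # max-heap of intervals keyed by negative error estimate
    heap = [(-err_est(a, b), a, b)]
    for _ in range(max_iter):
        neg_e, l, r = heapq.heappop(heap)
        if -neg_e <= tol:                 # worst interval already fine
            heapq.heappush(heap, (neg_e, l, r))
            break
        m = 0.5 * (l + r)                 # bisect the worst interval
        heapq.heappush(heap, (-err_est(l, m), l, m))
        heapq.heappush(heap, (-err_est(m, r), m, r))
    # breakpoints: left endpoints of all intervals, plus b
    return sorted([l for _, l, r in heap] + [b])
```

Running this on a peaked function such as $e^{-100x^2}$ produces many more breakpoints near the peak than in the flat tails, which is the qualitative behavior the abstract describes.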





Jiequn Han, Yingzhou Li, Lin Lin (2019)
We consider universal approximations of symmetric and anti-symmetric functions, which are important for applications in quantum physics, as well as other scientific and engineering computations. We give constructive approximations with explicit bounds on the number of parameters with respect to the dimension and the target accuracy $\epsilon$. While the approximation still suffers from the curse of dimensionality, to the best of our knowledge, these are the first results in the literature with explicit error bounds. Moreover, we also discuss neural network architectures that can be suitable for approximating symmetric and anti-symmetric functions.
We consider approximation problems for a special space of $d$-variate functions. We show that the problems have a small number of active variables, as has been postulated in the past using concentration of measure arguments. We also show that, depending on the norm for measuring the error, the problems are strongly polynomially or quasi-polynomially tractable even in the model of computation where functional evaluations have cost exponential in the number of active variables.
We propose an optimal approximation formula for analytic functions that are defined on a complex region containing the real interval $(-1,1)$ and possibly have algebraic singularities at the endpoints of the interval. As a space of such functions, we consider a Hardy space with the weight given by $w_{\mu}(z) = (1-z^{2})^{\mu/2}$ for $\mu > 0$, and formulate the optimality of an approximation formula for the functions in the space. Then, we propose an optimal approximation formula for the space for any $\mu > 0$, as opposed to existing results with the restriction $0 < \mu < \mu_{\ast}$ for a certain constant $\mu_{\ast}$. We also provide the results of numerical experiments to show the performance of the proposed formula.
In this paper we consider the approximation of functions by radial basis function interpolants. There is a plethora of results about the asymptotic behaviour of the error between appropriately smooth functions and their interpolants, as the interpolation points fill out a bounded domain in $\mathbb{R}^d$. In all of these cases, the analysis takes place in a natural function space dictated by the choice of radial basis function: the native space. In many cases, the native space contains functions possessing a certain amount of smoothness. We address the question of what can be said about these error estimates when the function being interpolated fails to have the required smoothness. These are the rough functions of the title. We limit our discussion to surface splines, as an exemplar of a wider class of radial basis functions, because we feel our techniques are most easily seen and understood in this setting.
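To make the surface-spline setting concrete, here is a minimal one-dimensional sketch of radial basis function interpolation with the polyharmonic kernel $\phi(r) = r^3$, augmented by a linear polynomial with the usual side conditions. The function name is hypothetical; production code (e.g. SciPy's `RBFInterpolator`) handles conditioning and higher dimensions.

```python
import numpy as np

def surface_spline_interpolant(x, y):
    """Build a 1-D polyharmonic ("surface spline") interpolant
        s(t) = sum_j c_j |t - x_j|^3 + d0 + d1 * t,
    with side conditions sum_j c_j = sum_j c_j x_j = 0.
    Minimal sketch for distinct nodes x with values y."""
    n = len(x)
    A = np.abs(x[:, None] - x[None, :]) ** 3   # kernel matrix phi(|xi - xj|)
    P = np.column_stack([np.ones(n), x])       # linear polynomial part
    # saddle-point system  [[A, P], [P^T, 0]] [c; d] = [y; 0]
    M = np.zeros((n + 2, n + 2))
    M[:n, :n] = A
    M[:n, n:] = P
    M[n:, :n] = P.T
    sol = np.linalg.solve(M, np.concatenate([y, np.zeros(2)]))
    c, d = sol[:n], sol[n:]
    def s(t):
        t = np.asarray(t, dtype=float)
        K = np.abs(t[..., None] - x) ** 3      # kernel evaluated at t
        return K @ c + d[0] + d[1] * t
    return s
```

The polynomial augmentation makes the interpolant reproduce linear functions exactly, which is tied to the conditional positive definiteness of the cubic kernel.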
V.N. Temlyakov (2015)
The paper gives a constructive method, based on greedy algorithms, that achieves, for classes of functions with small mixed smoothness, the best possible order of approximation error for $m$-term approximation with respect to the trigonometric system.
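The basic idea of $m$-term approximation with respect to the trigonometric system can be sketched as a pure greedy selection: keep the $m$ Fourier coefficients of largest magnitude and discard the rest. This toy version, with a hypothetical function name, only illustrates the notion of $m$-term approximation; the paper's constructive method for mixed-smoothness classes is considerably more refined.

```python
import numpy as np

def greedy_m_term_trig(samples, m):
    """Greedy m-term trigonometric approximation from equispaced
    samples: keep the m discrete Fourier coefficients of largest
    magnitude and zero out the rest. Toy illustration only."""
    n = len(samples)
    coeffs = np.fft.fft(samples) / n               # normalized DFT
    keep = np.argsort(np.abs(coeffs))[-m:]         # indices of m largest
    sparse = np.zeros_like(coeffs)
    sparse[keep] = coeffs[keep]                    # greedy m-term selection
    return np.fft.ifft(sparse * n).real            # back to sample values
```

For a signal that is exactly a sum of a few cosines, keeping the corresponding coefficient pairs reproduces the samples to machine precision, while a smaller $m$ leaves the omitted mode as the residual error.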