
Rescaled Pure Greedy Algorithm for Hilbert and Banach Spaces

Posted by: Guergana Petrova
Publication date: 2015
Language: English
Author: Guergana Petrova





We show that a very simple modification of the Pure Greedy Algorithm for approximating functions by sparse sums from a dictionary in a Hilbert or, more generally, a Banach space has optimal convergence rates on the class of convex combinations of dictionary elements.
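The abstract leaves the precise rescaling to the paper; purely as an illustration, here is a minimal sketch of one natural reading of a rescaled pure-greedy step in the concrete Hilbert space $\mathbb{R}^n$ with a finite dictionary: perform the usual pure-greedy update and then multiply the iterate by the scalar that best fits the target. The function name, the stopping rule, and the choice of rescaling are assumptions of this sketch, not a transcription of the paper's algorithm.

```python
import numpy as np

def rescaled_pure_greedy(f, dictionary, n_steps):
    """Illustrative rescaled pure-greedy iteration in R^n.

    f          : target vector (element of the Hilbert space R^n)
    dictionary : array of shape (K, n), rows assumed normalized
    n_steps    : number of greedy steps
    Returns the approximant after n_steps steps.
    """
    approx = np.zeros_like(f)
    for _ in range(n_steps):
        residual = f - approx
        # Greedy selection: dictionary element most correlated with the residual.
        inner = dictionary @ residual
        k = np.argmax(np.abs(inner))
        g = dictionary[k]
        # Pure-greedy update.
        candidate = approx + inner[k] * g
        # Rescaling: best scalar multiple of the pure-greedy iterate.
        norm2 = np.dot(candidate, candidate)
        s = np.dot(f, candidate) / norm2 if norm2 > 0 else 0.0
        approx = s * candidate
    return approx

# Tiny usage example: approximate a convex combination of dictionary elements.
rng = np.random.default_rng(0)
D = rng.standard_normal((50, 20))
D /= np.linalg.norm(D, axis=1, keepdims=True)
weights = rng.random(5); weights /= weights.sum()
target = weights @ D[:5]
approx = rescaled_pure_greedy(target, D, n_steps=10)
print("residual norm:", np.linalg.norm(target - approx))
```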


Read also

We suggest a new greedy strategy for convex optimization in Banach spaces and prove its convergence rates under a suitable behavior of the modulus of uniform smoothness of the objective function.
This paper studies the problem of approximating a function $f$ in a Banach space $X$ from measurements $l_j(f)$, $j=1,\dots,m$, where the $l_j$ are linear functionals from $X^*$. Most results study this problem for classical Banach spaces $X$ such as the $L_p$ spaces, $1\le p\le \infty$, and for $K$ the unit ball of a smoothness space in $X$. Our interest in this paper is in the model classes $K=K(\epsilon,V)$, with $\epsilon>0$ and $V$ a finite dimensional subspace of $X$, which consist of all $f\in X$ such that $\operatorname{dist}(f,V)_X\le \epsilon$. These model classes, called approximation sets, arise naturally in application domains such as parametric partial differential equations, uncertainty quantification, and signal processing. A general theory for the recovery of approximation sets in a Banach space is given. This theory includes tight a priori bounds on optimal performance, and algorithms for finding near optimal approximations. We show how the recovery problem for approximation sets is connected with well-studied concepts in Banach space theory such as liftings and the angle between spaces. Examples are given that show how this theory can be used to recover several recent results on sampling and data assimilation.
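A general Banach-space treatment does not fit in a short sketch, but the Hilbert-space special case admits a simple recovery map worth seeing: observe the projection $w = P_W f$ of $f$ onto the measurement space $W$, fit the model subspace $V$ to the data in the projected norm, and add back whatever part of the measurement the fit does not explain. The sketch below assumes $X=\mathbb{R}^n$ with the Euclidean inner product and an orthonormal basis for $W$; the matrix names are mine, and this illustrates the Hilbert-space case rather than the paper's general construction.

```python
import numpy as np

def hilbert_recovery(f, V, W):
    """Illustrative recovery of f in R^n from measurements spanned by W,
    given the prior that f lies within epsilon of the column span of V.

    V : (n, k) basis of the model subspace
    W : (n, m) orthonormal basis of the measurement space
    Returns a reconstruction u with P_W u = P_W f.
    """
    w = W @ (W.T @ f)  # observed data: projection of f onto W
    # Best fit of the model subspace to the data, measured after projecting onto W.
    c, *_ = np.linalg.lstsq(W.T @ V, W.T @ f, rcond=None)
    v_star = V @ c
    # Correction keeps the reconstruction consistent with the measurements.
    return v_star + (w - W @ (W.T @ v_star))
```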
Recently, neural networks have been widely applied for solving partial differential equations. However, the resulting optimization problem poses many challenges for current training algorithms; in particular, the convergence order that has been proven theoretically is not attained numerically. In this paper, we develop a novel greedy training algorithm for solving PDEs which builds the neural network architecture adaptively. It is the first training algorithm for which the theoretical convergence order of neural networks is observed numerically. This innovative algorithm is tested on several benchmark examples in both 1D and 2D to confirm its efficiency and robustness.
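The paper's training algorithm is tied to the PDE residual and to its particular rule for adding neurons, which the abstract does not spell out. Purely to illustrate the adaptive, one-neuron-at-a-time idea, the sketch below grows a shallow tanh network greedily for a plain 1D least-squares fit: each step scores a random pool of candidate neurons against the current residual, keeps the best one, and re-fits the outer weights. Everything here (the candidate pool, the scoring, the tanh activation) is an assumption of the sketch, not the authors' method.

```python
import numpy as np

def greedy_shallow_fit(x, y, n_neurons, n_candidates=200, rng=None):
    """Grow a 1D shallow tanh network one neuron at a time (illustrative only).

    Each step scores a pool of random candidate neurons by correlation with
    the current residual, adds the best one, and re-fits the outer weights
    by least squares.
    """
    rng = np.random.default_rng(rng)
    weights, biases = [], []
    residual = y.copy()
    for _ in range(n_neurons):
        # Candidate inner parameters (weight, bias) drawn at random.
        w_cand = rng.uniform(-10, 10, n_candidates)
        b_cand = rng.uniform(-10, 10, n_candidates)
        feats = np.tanh(np.outer(x, w_cand) + b_cand)   # (n_points, n_candidates)
        scores = np.abs(feats.T @ residual) / np.linalg.norm(feats, axis=0)
        k = np.argmax(scores)
        weights.append(w_cand[k]); biases.append(b_cand[k])
        # Re-fit all outer coefficients by least squares.
        Phi = np.tanh(np.outer(x, weights) + np.array(biases))
        coeffs, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        residual = y - Phi @ coeffs
    return np.array(weights), np.array(biases), coeffs

# Usage: greedily fit a smooth 1D target.
x = np.linspace(-1, 1, 200)
y = np.sin(np.pi * x)
w, b, c = greedy_shallow_fit(x, y, n_neurons=8, rng=0)
print("final residual norm:", np.linalg.norm(y - np.tanh(np.outer(x, w) + b) @ c))
```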
We prove that the Banach space $(\oplus_{n=1}^\infty \ell_p^n)_{\ell_q}$, which is isomorphic to certain Besov spaces, has a greedy basis whenever $1\leq p\leq\infty$ and $1<q<\infty$. Furthermore, the Banach spaces $(\oplus_{n=1}^\infty \ell_p^n)_{\ell_1}$, with $1<p\le \infty$, and $(\oplus_{n=1}^\infty \ell_p^n)_{c_0}$, with $1\le p<\infty$, do not have greedy bases. We prove as well that the space $(\oplus_{n=1}^\infty \ell_p^n)_{\ell_q}$ has a 1-greedy basis if and only if $1\leq p=q\le \infty$.
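For reference, the standard Konyagin-Temlyakov terminology behind these statements: a normalized basis $(e_n)$ of a Banach space $X$ is greedy with constant $C$ if $\|f - G_m(f)\|_X \le C\,\sigma_m(f)_X$ for every $f\in X$ and every $m$, where $G_m(f)$ retains the $m$ largest coefficients of $f$ in modulus and $\sigma_m(f)_X = \inf\{\|f - \sum_{n\in\Lambda} c_n e_n\|_X : |\Lambda|\le m\}$ is the best $m$-term approximation error; the basis is 1-greedy when one can take $C=1$.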
The Gaussian kernel plays a central role in machine learning, uncertainty quantification and scattered data approximation, but has received relatively little attention from a numerical analysis standpoint. The basic problem of finding an algorithm for efficient numerical integration of functions reproduced by Gaussian kernels has not been fully solved. In this article we construct two classes of algorithms that use $N$ evaluations to integrate $d$-variate functions reproduced by Gaussian kernels and prove the exponential or super-algebraic decay of their worst-case errors. In contrast to earlier work, no constraints are placed on the length-scale parameter of the Gaussian kernel. The first class of algorithms is obtained via an appropriate scaling of the classical Gauss-Hermite rules. For these algorithms we derive lower and upper bounds on the worst-case error of the forms $\exp(-c_1 N^{1/d}) N^{1/(4d)}$ and $\exp(-c_2 N^{1/d}) N^{-1/(4d)}$, respectively, for positive constants $c_1 > c_2$. The second class of algorithms we construct is more flexible and uses worst-case optimal weights for points that may be taken as a nested sequence. For these algorithms we derive upper bounds of the form $\exp(-c_3 N^{1/(2d)})$ for a positive constant $c_3$.
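As a concrete, heavily simplified illustration of the building block behind the first construction, the sketch below assembles a classical tensor-product Gauss-Hermite rule. It assumes integration against the standard Gaussian density on $\mathbb{R}^d$, which may differ from the paper's precise setting, and it does not reproduce the paper's scaling of the nodes or its error analysis.

```python
import itertools
import numpy as np

def gauss_hermite_tensor_rule(d, n_per_dim):
    """Classical tensor-product Gauss-Hermite rule for integration against
    the standard Gaussian density on R^d (the building block which, per the
    abstract, the paper rescales appropriately; that rescaling is not
    reproduced here).
    """
    # hermgauss targets the weight exp(-x^2); the change of variables
    # x -> sqrt(2) x converts it to the standard normal density.
    x, w = np.polynomial.hermite.hermgauss(n_per_dim)
    x = np.sqrt(2.0) * x
    w = w / np.sqrt(np.pi)
    nodes = np.array(list(itertools.product(x, repeat=d)))
    weights = np.prod(np.array(list(itertools.product(w, repeat=d))), axis=1)
    return nodes, weights

# Usage: integrate a translate of the Gaussian kernel in d = 2.
nodes, weights = gauss_hermite_tensor_rule(d=2, n_per_dim=10)
f = lambda z: np.exp(-np.sum((z - 0.3) ** 2, axis=1))
print("quadrature value:", weights @ f(nodes))
```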