We show that a very simple modification of the Pure Greedy Algorithm for approximating functions by sparse sums from a dictionary in a Hilbert space, or more generally a Banach space, has optimal convergence rates on the class of convex combinations of dictionary elements.
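For orientation, here is a minimal sketch of the unmodified Pure Greedy Algorithm (matching pursuit) in a finite-dimensional Hilbert space; the dictionary layout and the names `dictionary` and `n_iterations` are illustrative assumptions, not taken from the paper, and the sketch only fixes the baseline that the paper modifies:

```python
import numpy as np

def pure_greedy(f, dictionary, n_iterations):
    """Pure Greedy Algorithm (matching pursuit) in R^d.

    f          : target vector in R^d
    dictionary : d x N array whose columns are unit-norm dictionary elements
    Returns the sparse approximant g and the residual r = f - g.
    """
    r = f.astype(float).copy()        # current residual
    g = np.zeros_like(r)              # current sparse approximant
    for _ in range(n_iterations):
        inner = dictionary.T @ r      # inner products <r, phi_j>
        j = np.argmax(np.abs(inner))  # greedy choice: best-correlated element
        g += inner[j] * dictionary[:, j]
        r -= inner[j] * dictionary[:, j]
    return g, r
```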
We suggest a new greedy strategy for convex optimization in Banach spaces and prove convergence rates for it under suitable behavior of the modulus of uniform smoothness of the objective function.
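For context, the smoothness assumption in such results is usually phrased through the modulus of smoothness of the objective $E$; a standard form (our gloss, not a quotation from the paper) is

$$\rho(E,u)=\frac{1}{2}\sup_{x\in S,\ \|y\|=1}\bigl|E(x+uy)+E(x-uy)-2E(x)\bigr|,$$

with rates derived under a growth bound such as $\rho(E,u)\le\gamma u^q$ for some $1<q\le 2$, the supremum being taken over a suitable bounded set $S$ containing the iterates.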
This paper studies the problem of approximating a function $f$ in a Banach space $X$ from measurements $l_j(f)$, $j=1,\dots,m$, where the $l_j$ are linear functionals from $X^*$. Most results study this problem for classical Banach spaces $X$ such as
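The benchmark for such results can be stated as a worked formula; in standard (illustrative) notation, the optimal recovery error of a model class $K\subset X$ from the $m$ measurements is

$$E_m(K)_X=\inf_{A:\mathbb{R}^m\to X}\ \sup_{f\in K}\ \bigl\|f-A\bigl(l_1(f),\dots,l_m(f)\bigr)\bigr\|_X,$$

the infimum running over all recovery maps $A$, and an algorithm is near optimal if it achieves this error up to a fixed constant factor.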
Recently, neural networks have been widely applied to solving partial differential equations. However, the resulting optimization problem poses many challenges for current training algorithms. This manifests itself in the fact that the convergence
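A minimal sketch of the kind of residual-based training objective this line of work studies, assuming a one-dimensional Poisson problem $-u''=f$ on $(0,1)$ with zero boundary values (the model problem, names, and equal loss weights are illustrative assumptions):

```python
import torch

def pde_residual_loss(model, x_interior, x_boundary, f):
    """Collocation loss for -u''(x) = f(x) on (0,1), u(0) = u(1) = 0.

    model      : torch.nn.Module mapping x -> u(x)
    x_interior : interior collocation points, shape (n, 1), requires_grad=True
    x_boundary : boundary points, shape (2, 1)
    f          : callable returning the right-hand side f(x)
    """
    u = model(x_interior)
    du = torch.autograd.grad(u.sum(), x_interior, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x_interior, create_graph=True)[0]
    residual = -d2u - f(x_interior)      # PDE residual at collocation points
    boundary = model(x_boundary)         # should vanish on the boundary
    return (residual ** 2).mean() + (boundary ** 2).mean()
```

Minimizing this loss over the network parameters is the nonconvex optimization problem whose convergence behavior is at issue.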
We prove that the Banach space $(\oplus_{n=1}^\infty \ell_p^n)_{\ell_q}$, which is isomorphic to certain Besov spaces, has a greedy basis whenever $1\leq p \leq\infty$ and $1<q<\infty$. Furthermore, the Banach spaces $(\oplus_{n=1}^\infty \ell_p^n)_{\ell_1}$, with
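Recall the standard definition behind these statements (Konyagin and Temlyakov's notion, stated here for reference): a basis $(e_n)$ of $X$ is greedy if there is a constant $C$ such that for all $x\in X$ and all $m$,

$$\|x-G_m(x)\|\le C\,\sigma_m(x),\qquad \sigma_m(x)=\inf\Bigl\{\Bigl\|x-\sum_{n\in\Lambda}a_ne_n\Bigr\|:\ |\Lambda|\le m\Bigr\},$$

where $G_m(x)$ retains the $m$ largest coefficients of $x$ in absolute value, so that simple thresholding is, up to a constant, a best $m$-term approximation.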
The Gaussian kernel plays a central role in machine learning, uncertainty quantification and scattered data approximation, but has received relatively little attention from a numerical analysis standpoint. The basic problem of finding an algorithm for
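A minimal sketch of the basic computation at issue, plain Gaussian kernel interpolation; the shape parameter `eps` and the data are illustrative, and the kernel matrix is well known to become severely ill-conditioned as `eps` shrinks, which is a central numerical difficulty:

```python
import numpy as np

def gaussian_interpolant(x_data, y_data, eps):
    """Fit s(x) = sum_j c_j * exp(-eps^2 (x - x_j)^2) with s(x_i) = y_i."""
    K = np.exp(-eps**2 * (x_data[:, None] - x_data[None, :])**2)  # kernel matrix
    coef = np.linalg.solve(K, y_data)  # ill-conditioned for small eps
    def s(x):
        return np.exp(-eps**2 * (x[:, None] - x_data[None, :])**2) @ coef
    return s

# Example: interpolate sin(2*pi*x) at 10 equispaced points
x = np.linspace(0.0, 1.0, 10)
s = gaussian_interpolant(x, np.sin(2 * np.pi * x), eps=3.0)
```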