While it is well known that nonlinear methods of approximation can often perform dramatically better than linear methods, there are still questions on how to measure the optimal performance possible for such methods. This paper studies nonlinear methods of approximation that are compatible with numerical implementation in that they are required to be numerically stable. A measure of optimal performance, called {\em stable manifold widths}, for approximating a model class $K$ in a Banach space $X$ by stable manifold methods is introduced. Fundamental inequalities between these stable manifold widths and the entropy of $K$ are established. The effects of requiring stability in the settings of deep learning and compressed sensing are discussed.
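To make the notion concrete, one natural way to formalize such a stable width (a sketch consistent with the abstract, not necessarily the paper's exact definition) is to require both the encoding and reconstruction maps to be Lipschitz:
$$
\delta_{n,\gamma}^{*}(K)_X \;:=\; \inf_{a,\,M}\ \sup_{f\in K}\ \| f - M(a(f)) \|_X ,
$$
where the infimum runs over encoders $a:X\to\mathbb{R}^n$ and decoders $M:\mathbb{R}^n\to X$ that are Lipschitz with constant at most $\gamma$; such a quantity can then be compared with the entropy numbers $\epsilon_n(K)_X$ of the model class.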
Given a function $u\in L^2=L^2(D,\mu)$, where $D\subset \mathbb{R}^d$ and $\mu$ is a measure on $D$, and a linear subspace $V_n\subset L^2$ of dimension $n$, we show that near-best approximation of $u$ in $V_n$ can be computed from a near-optimal budget of…
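A typical construction in this line of work (stated here only as an illustration; the symbols $m$, $x_i$, and $w$ are assumptions, not taken from the abstract) is a weighted least-squares fit from point evaluations of $u$:
$$
\tilde u \;:=\; \operatorname*{arg\,min}_{v\in V_n}\ \sum_{i=1}^{m} w(x_i)\,\big| u(x_i) - v(x_i) \big|^2 ,
$$
with sample points $x_1,\dots,x_m$ drawn from a suitable sampling density and $w$ the corresponding weight; the "budget" then refers to the number $m$ of evaluations needed for $\tilde u$ to be a near-best approximation with high probability.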
This paper studies numerical methods for the approximation of elliptic PDEs with lognormal coefficients of the form $-{\rm div}(a\nabla u)=f$, where $a=\exp(b)$ and $b$ is a Gaussian random field. The approximant of the solution $u$ is an $n$-term polynomial…
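For orientation, an $n$-term polynomial approximation in this lognormal setting typically takes the form of a truncated Hermite expansion in the Gaussian parameters (the parametrization $y$ and index set $\Lambda_n$ below are illustrative assumptions). Writing $b=\sum_{j\ge 1} y_j \psi_j$ with $y_j$ i.i.d. standard Gaussian,
$$
u_n(y) \;=\; \sum_{\nu\in\Lambda_n} u_\nu \, H_\nu(y), \qquad H_\nu(y)=\prod_{j\ge 1} H_{\nu_j}(y_j), \qquad \#\Lambda_n = n,
$$
where $H_k$ are the univariate Hermite polynomials and $u_\nu$ the coefficients of the solution map $y\mapsto u(y)$.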
We study a class of nonlinear eigenvalue problems of Schrödinger type, where the potential is singular on a set of points. Such problems are widely present in physics and chemistry, and their analysis is of both theoretical and practical interest. In…
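A representative instance of such a problem (given only as an illustrative model; the Gross–Pitaevskii-type nonlinearity and Coulomb-type potential below are assumptions) is
$$
-\Delta u + V u + |u|^2 u = \lambda u, \qquad \|u\|_{L^2}=1, \qquad V(x) = -\sum_{j} \frac{z_j}{|x - R_j|},
$$
so that the potential $V$ is singular precisely at the points $R_j$.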
We demonstrate that the recently developed Optimal Uncertainty Quantification (OUQ) theory, combined with recent software enabling fast global solutions of constrained non-convex optimization problems, provides a methodology for rigorous model certification…
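In broad strokes, the OUQ framework reduces certification to optimization over an admissible set $\mathcal{A}$ of (response function, input distribution) pairs consistent with the available information; the sketch below uses generic symbols ($g$, $\mu$, failure threshold $a$) that are assumptions for illustration:
$$
\overline{\mathcal U}(\mathcal A) \;=\; \sup_{(g,\mu)\in\mathcal A} \ \mu\big[\, g(X) \ge a \,\big],
\qquad
\underline{\mathcal U}(\mathcal A) \;=\; \inf_{(g,\mu)\in\mathcal A} \ \mu\big[\, g(X) \ge a \,\big],
$$
and certification then amounts to checking whether these optimal upper and lower bounds on the probability of failure fall on the safe side of a prescribed tolerance.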
Neural Networks (NNs) are the method of choice for building learning algorithms. Their popularity stems from their empirical success on several challenging learning problems. However, most scholars agree that a convincing theoretical explanation for…