
Optimal Stable Nonlinear Approximation

Published by: Guergana Petrova
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





While it is well known that nonlinear methods of approximation can often perform dramatically better than linear methods, there are still questions on how to measure the optimal performance possible for such methods. This paper studies nonlinear methods of approximation that are compatible with numerical implementation in that they are required to be numerically stable. A measure of optimal performance, called {\em stable manifold widths}, for approximating a model class $K$ in a Banach space $X$ by stable manifold methods is introduced. Fundamental inequalities between these stable manifold widths and the entropy of $K$ are established. The effects of requiring stability in the settings of deep learning and compressed sensing are discussed.
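For orientation, here is a sketch of the kind of quantity the abstract refers to; the notation below is our own and need not match the paper's. Classical manifold widths measure the best error achievable by any pair of an encoder $a$ and a decoder $M$; the stable variant additionally requires both maps to be Lipschitz:

```latex
% Sketch only: notation assumed, not quoted from the paper.
% Encoder a : X -> R^n, decoder M : R^n -> X, both gamma-Lipschitz.
\[
  \delta^{*}_{n,\gamma}(K)_X
  \;=\; \inf_{a,\,M \ \gamma\text{-Lipschitz}}\
        \sup_{f \in K}\, \| f - M(a(f)) \|_X .
\]
```

Dropping the Lipschitz constraint recovers the usual (unstable) manifold width; the paper's inequalities compare such stable widths with the entropy of $K$.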




Read also

Given a function $u\in L^2=L^2(D,\mu)$, where $D\subset\mathbb{R}^d$ and $\mu$ is a measure on $D$, and a linear subspace $V_n\subset L^2$ of dimension $n$, we show that near-best approximation of $u$ in $V_n$ can be computed from a near-optimal budget of $Cn$ pointwise evaluations of $u$, with $C>1$ a universal constant. The sampling points are drawn according to some random distribution, the approximation is computed by a weighted least-squares method, and the error is assessed in expected $L^2$ norm. This result improves on the results in [6,8], which require a sampling budget that is sub-optimal by a logarithmic factor, thanks to a sparsification strategy introduced in [17,18]. As a consequence, we obtain for any compact class $\mathcal{K}\subset L^2$ that the sampling number $\rho_{Cn}^{\rm rand}(\mathcal{K})_{L^2}$ in the randomized setting is dominated by the Kolmogorov $n$-width $d_n(\mathcal{K})_{L^2}$. While our result shows the existence of a randomized sampling with such near-optimal properties, we discuss remaining issues concerning its generation by a computationally efficient algorithm.
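The following is a minimal, self-contained sketch of the weighted least-squares idea in one dimension, without the sparsification step that achieves the optimal $Cn$ budget. The Legendre basis, the target function, the budget constant, and the rejection sampler are all illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8          # dimension of the polynomial space V_n
m = 4 * n      # sampling budget Cn (C = 4 is an illustrative choice)

# Orthonormal Legendre basis on [-1,1] for the uniform measure dmu = dx/2.
def basis(x):
    V = np.polynomial.legendre.legvander(x, n - 1)
    return V * np.sqrt(2.0 * np.arange(n) + 1.0)

def u(x):      # hypothetical target function
    return np.exp(np.sin(3.0 * x))

def christoffel(x):        # k_n(x) = sum_j |L_j(x)|^2
    return np.sum(basis(x) ** 2, axis=1)

# Draw samples from the density (k_n/n) dmu by rejection sampling;
# on [-1,1] one has k_n(x) <= n^2, which gives the acceptance bound.
def sample(m):
    pts = np.empty(0)
    while pts.size < m:
        x = rng.uniform(-1.0, 1.0, size=4 * m)
        keep = rng.uniform(size=x.size) < christoffel(x) / n**2
        pts = np.concatenate([pts, x[keep]])
    return pts[:m]

x = sample(m)
w = n / christoffel(x)               # weights w(x_i) = n / k_n(x_i)
A = np.sqrt(w)[:, None] * basis(x)   # weighted design matrix
c, *_ = np.linalg.lstsq(A, np.sqrt(w) * u(x), rcond=None)

xs = np.linspace(-1.0, 1.0, 2000)    # assess the error on a fine grid
err = np.sqrt(np.mean((u(xs) - basis(xs) @ c) ** 2))
print(f"approximate L2 error of the weighted least-squares fit: {err:.2e}")
```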
This paper studies numerical methods for the approximation of elliptic PDEs with lognormal coefficients of the form $-\mathrm{div}(a\nabla u)=f$, where $a=\exp(b)$ and $b$ is a Gaussian random field. The approximant of the solution $u$ is an $n$-term polynomial expansion in the scalar Gaussian random variables that parametrize $b$. We present a general convergence analysis of weighted least-squares approximants for smooth and arbitrarily rough random fields, using a suitable random design, for which we prove optimality in the following sense: their convergence rate matches exactly or closely the rate that has been established in \cite{BCDM} for best $n$-term approximation by Hermite polynomials, under the same minimal assumptions on the Gaussian random field. This is in contrast with the current state-of-the-art results for the stochastic Galerkin method, which suffers from a lack of coercivity due to the lognormal nature of the diffusion field. Numerical tests with $b$ as the Brownian bridge confirm our theoretical findings.
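As a reminder of the standard setup behind this abstract (our notation, consistent with but not copied from the paper): the Gaussian field is expanded in scalar Gaussian variables, and the solution is approximated by an $n$-term Hermite expansion in those variables.

```latex
% Sketch of the usual lognormal parametrization (notation assumed).
\[
  b(x,y) \;=\; \sum_{j\ge 1} y_j\,\psi_j(x),
  \qquad y_j \sim \mathcal{N}(0,1)\ \text{i.i.d.},
  \qquad a \;=\; e^{b},
\]
\[
  u(y) \;\approx\; \sum_{\nu \in \Lambda_n} u_\nu\, H_\nu(y),
  \qquad H_\nu(y) \;=\; \prod_{j\ge 1} H_{\nu_j}(y_j),
\]
% (H_k) are the univariate Hermite polynomials and Lambda_n is a set
% of n multi-indices.
```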
Yvon Maday, Carlo Marcati (2019)
We study a class of nonlinear eigenvalue problems of Schrödinger type, where the potential is singular on a set of points. Such problems are widely present in physics and chemistry, and their analysis is of both theoretical and practical interest. In particular, we study the regularity of the eigenfunctions of the operators considered, and we propose and analyze the approximation of the solution via an isotropically refined $hp$ discontinuous Galerkin (dG) method. We show that, for weighted analytic potentials and for up-to-quartic nonlinearities, the eigenfunctions belong to analytic-type non-homogeneous weighted Sobolev spaces. We also prove quasi-optimal a priori estimates on the error of the dG finite element method; when using an isotropically refined $hp$ space, the numerical solution is shown to converge with exponential rate towards the exact eigenfunction. In addition, we investigate the role of pointwise convergence in the doubling of the convergence rate for the eigenvalues with respect to the convergence rate of eigenfunctions. We conclude with a series of numerical tests to validate the theoretical results.
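A representative model problem of the type described (our choice of a concrete instance, not necessarily the one treated in the paper) is a Gross-Pitaevskii-type eigenproblem with a potential that is singular at a point:

```latex
% Illustrative instance only: a cubic nonlinearity (within the paper's
% up-to-quartic range) and a Coulomb-type potential singular at x_0.
\[
  \bigl(-\Delta + V + |u|^{2}\bigr)\,u \;=\; \lambda\, u
  \quad \text{in } \Omega,
  \qquad \|u\|_{L^2(\Omega)} = 1,
  \qquad V(x) \;=\; -\frac{Z}{\,|x - x_0|\,}.
\]
```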
M. McKerns (2020)
We demonstrate that the recently developed Optimal Uncertainty Quantification (OUQ) theory, combined with recent software enabling fast global solutions of constrained non-convex optimization problems, provides a methodology for rigorous model certification, validation, and optimal design under uncertainty. In particular, we show the utility of the OUQ approach to understanding the behavior of a system that is governed by a partial differential equation -- Burgers equation. We solve the problem of predicting shock location when we only know bounds on viscosity and on the initial conditions. Through this example, we demonstrate the potential to apply OUQ to complex physical systems, such as systems governed by coupled partial differential equations. We compare our results to those obtained using a standard Monte Carlo approach, and show that OUQ provides more accurate bounds at a lower computational cost. We briefly discuss how to extend this approach to more complex systems, and how to integrate our approach into a more ambitious program of optimal experimental design.
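A toy sketch of the contrast drawn here, using a hypothetical surrogate `shock_location(nu, u0)` in place of an actual Burgers solver: when only a box of admissible inputs is known (support constraints and nothing else), the OUQ reduction says the extremal measures are Dirac masses, so the worst-case bound becomes a global optimization over the box, whereas plain Monte Carlo can only report the extremes it happens to see.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)

def shock_location(nu, u0):
    # Hypothetical smooth surrogate standing in for a Burgers solver.
    return 1.0 + u0 / (1.0 + 5.0 * nu) + 0.3 * np.sin(4.0 * u0)

bounds = [(0.01, 0.1), (0.5, 1.5)]   # assumed bounds on viscosity and IC

# Monte Carlo: sample the box uniformly, report the observed range.
samples = rng.uniform([b[0] for b in bounds], [b[1] for b in bounds],
                      size=(10_000, 2))
vals = shock_location(samples[:, 0], samples[:, 1])
print("Monte Carlo range :", vals.min(), vals.max())

# OUQ-style bound: with only support constraints, extremal measures are
# Dirac masses, so the bound is a global optimization over the box.
lo = differential_evolution(lambda p: shock_location(*p), bounds)
hi = differential_evolution(lambda p: -shock_location(*p), bounds)
print("OUQ-style bounds  :", lo.fun, -hi.fun)
```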
Neural Networks (NNs) are the method of choice for building learning algorithms. Their popularity stems from their empirical success on several challenging learning problems. However, most scholars agree that a convincing theoretical explanation for this success is still lacking. This article surveys the known approximation properties of the outputs of NNs with the aim of uncovering the properties that are not present in the more traditional methods of approximation used in numerical analysis. Comparisons are made with traditional approximation methods from the viewpoint of rate distortion. Another major component in the analysis of numerical approximation is the computational time needed to construct the approximation, and this in turn is intimately connected with the stability of the approximation algorithm. So the stability of numerical approximation using NNs is a large part of the analysis put forward. The survey, for the most part, is concerned with NNs using the popular ReLU activation function. In this case, the outputs of the NNs are piecewise linear functions on rather complicated partitions of the domain of the target function $f$ into cells that are convex polytopes. When the architecture of the NN is fixed and the parameters are allowed to vary, the set of output functions of the NN is a parameterized nonlinear manifold. It is shown that this manifold has certain space filling properties leading to an increased ability to approximate (better rate distortion) but at the expense of numerical stability. The space filling creates a challenge to the numerical method in finding best or good parameter choices when trying to approximate.
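To make the piecewise-linear picture concrete, here is a small numerical illustration (network sizes and seed are arbitrary choices, not from the survey): a one-input ReLU network is affine on each region where the hidden activation pattern is constant, so counting the distinct patterns met along a fine grid counts the linear pieces encountered there.

```python
import numpy as np

rng = np.random.default_rng(0)
width = 8                               # arbitrary hidden width
W1, b1 = rng.normal(size=(width, 1)), rng.normal(size=width)
W2, b2 = rng.normal(size=width), rng.normal()

def relu_net(x):
    # One hidden ReLU layer: the output is piecewise linear in x.
    h = np.maximum(W1[:, 0] * x + b1, 0.0)
    return W2 @ h + b2

# Breakpoints occur where a hidden unit crosses zero; between them the
# activation pattern, and hence the affine map, is fixed.
xs = np.linspace(-3.0, 3.0, 4001)
patterns = {tuple(W1[:, 0] * x + b1 > 0) for x in xs}
print("linear pieces met on [-3, 3]:", len(patterns))
```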