We consider the problem of approximating a function in general nonlinear subsets of $L^2$ when only a weighted Monte Carlo estimate of the $L^2$-norm can be computed. Of particular interest in this setting is the concept of sample complexity, the number of samples that are necessary to recover the best approximation. Bounds for this quantity have been derived in a previous work; they depend primarily on the model class and are not improved by the regularity of the sought function. This result, however, is only a worst-case bound and cannot explain the remarkable performance of iterative hard thresholding algorithms that is observed in practice. We reexamine the results of the previous paper and derive a new bound that is able to utilize the regularity of the sought function. A critical analysis of our results allows us to derive a sample-efficient algorithm for the model set of low-rank tensors. The viability of this algorithm is demonstrated by recovering quantities of interest for a classical high-dimensional random partial differential equation.
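For concreteness, below is a minimal sketch of one step pattern of iterative hard thresholding on the simplest low-rank model set, rank-$r$ matrices: a gradient step on the empirical least-squares loss followed by projection onto the model set via truncated SVD. The Gaussian measurement setup, step size, and all names are illustrative assumptions, not the algorithm analyzed in the paper.

```python
import numpy as np

def svd_threshold(X, r):
    # Projection onto the rank-r model set via truncated SVD
    # (the "hard thresholding" step).
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def iht_lowrank(y, A, shape, r, step=0.5, iters=500):
    """Recover a rank-r matrix X from linear measurements y_i = <A_i, X>.

    A stacks the measurement functionals, shape (n, m, k). The matrix
    case is the simplest instance of a low-rank tensor model set.
    """
    X = np.zeros(shape)
    for _ in range(iters):
        residual = np.einsum('imk,mk->i', A, X) - y       # empirical residual
        grad = np.einsum('i,imk->mk', residual, A) / len(y)
        X = svd_threshold(X - step * grad, r)             # step + projection
    return X

# Tiny usage example with random Gaussian measurements (assumption).
rng = np.random.default_rng(0)
m, k, r, n = 10, 8, 2, 400
X_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, k))
A = rng.standard_normal((n, m, k))
y = np.einsum('imk,mk->i', A, X_true)
X_hat = iht_lowrank(y, A, (m, k), r)
print(np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true))
```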
We consider best approximation problems in a nonlinear subset $\mathcal{M}$ of a Banach space of functions $(\mathcal{V},\|\bullet\|)$. The norm is assumed to be a generalization of the $L^2$-norm for which only a weighted Monte Carlo estimate $\|\bullet\|_n$ can be computed. …
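To illustrate the weighted Monte Carlo norm estimate, the sketch below computes the empirical (semi-)norm $\|v\|_n^2 = \frac{1}{n}\sum_{i=1}^n w(x_i)\,v(x_i)^2$ and the corresponding weighted least-squares fit in a linear polynomial model class. The density, weight function, and basis are illustrative assumptions, not the setting of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_norm(v, xs, w):
    # ||v||_n^2 = (1/n) * sum_i w(x_i) v(x_i)^2, a Monte Carlo estimate
    # of the L^2 norm when the x_i are drawn from a density rho and
    # w = 1/rho (importance weighting).
    return np.sqrt(np.mean(w(xs) * v(xs) ** 2))

def fit_least_squares(f, xs, w, degree):
    # Weighted least-squares projection onto polynomials of given degree.
    B = np.vander(xs, degree + 1)          # basis evaluations b_j(x_i)
    sw = np.sqrt(w(xs))
    coef, *_ = np.linalg.lstsq(sw[:, None] * B, sw * f(xs), rcond=None)
    return coef

f = lambda x: np.sin(np.pi * x)
xs = rng.uniform(-1.0, 1.0, size=500)      # rho = uniform on [-1, 1]
w = lambda x: np.ones_like(x)              # uniform sampling => constant weight
coef = fit_least_squares(f, xs, w, degree=7)
err = lambda x: f(x) - np.polyval(coef, x)
print(empirical_norm(err, xs, w))          # empirical best-approximation error
```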
We introduced the least-squares ReLU neural network (LSNN) method for solving the linear advection-reaction problem with discontinuous solution and showed that the method outperforms mesh-based numerical methods in terms of the number of degrees of freedom. …
In this work, we study the tensor ring decomposition and its associated numerical algorithms. We establish a sharp transition of algorithmic difficulty of the optimization problem as the bond dimension increases: On one hand, we show the existence of …
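For reference, the tensor ring format represents each entry as a trace of a product of core slices, $T[i_1,\dots,i_d] = \operatorname{Tr}(G_1[i_1] G_2[i_2] \cdots G_d[i_d])$, with the bond dimensions closing cyclically. The sketch below contracts the cores into the full tensor; the shapes and names are illustrative, and the full contraction is only feasible for small examples.

```python
import numpy as np

def tr_to_full(cores):
    """Contract tensor ring cores G_k of shape (r_k, n_k, r_{k+1}) into
    the full tensor (illustrative; exponential cost in the order d)."""
    out = cores[0]                          # shape (r_1, n_1, r_2)
    for core in cores[1:]:
        # contract the trailing bond index with the next core's leading bond
        out = np.tensordot(out, core, axes=([-1], [0]))
    # out has shape (r_1, n_1, ..., n_d, r_1); close the ring by tracing
    return np.trace(out, axis1=0, axis2=-1)

rng = np.random.default_rng(0)
dims, bond = (4, 5, 6), 3                   # bond dimension r_k = 3 throughout
cores = [rng.standard_normal((bond, n, bond)) for n in dims]
T = tr_to_full(cores)
print(T.shape)                              # (4, 5, 6)
```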
This paper studies the least-squares ReLU neural network (LSNN) method for solving the linear advection-reaction problem with discontinuous solution. The method is a discretization of an equivalent least-squares formulation in the set of neural network functions …
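The following PyTorch sketch shows the shape of such a least-squares loss for an advection-reaction equation $\beta\cdot\nabla u + \gamma u = f$ on $(0,1)^2$ with inflow boundary condition $u = g$: a sampled PDE residual term plus a penalized boundary term. The constant field $\beta$, the network size, the penalty weight, and the random sampling are illustrative assumptions, not the discretization used in the paper.

```python
import torch

torch.manual_seed(0)
beta = torch.tensor([1.0, 0.5])          # constant advection field (assumption)
gamma, alpha = 1.0, 10.0                 # reaction coefficient, boundary penalty

net = torch.nn.Sequential(               # shallow ReLU network
    torch.nn.Linear(2, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))

f = lambda x: torch.ones(x.shape[0])     # right-hand side (placeholder)
g = lambda x: torch.zeros(x.shape[0])    # inflow data (placeholder)

def lsnn_loss(n=1024):
    x = torch.rand(n, 2, requires_grad=True)          # interior samples
    u = net(x).squeeze(-1)
    grad_u, = torch.autograd.grad(u.sum(), x, create_graph=True)
    residual = grad_u @ beta + gamma * u - f(x)       # PDE residual
    # inflow boundary {x1 = 0} union {x2 = 0} for this choice of beta
    xb = torch.rand(n, 2)
    xb[: n // 2, 0] = 0.0
    xb[n // 2 :, 1] = 0.0
    ub = net(xb).squeeze(-1)
    return residual.pow(2).mean() + alpha * (ub - g(xb)).pow(2).mean()

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):                 # simple first-order training loop
    opt.zero_grad()
    loss = lsnn_loss()
    loss.backward()
    opt.step()
```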
The alternating least squares algorithm for CP and Tucker decomposition is dominated in cost by the tensor contractions necessary to set up the quadratic optimization subproblems. We introduce a novel family of algorithms that uses perturbative corrections …
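For orientation, the sketch below is a plain CP-ALS loop for an order-3 tensor; the cost-dominant contraction is the MTTKRP (matricized tensor times Khatri-Rao product) that sets up each quadratic subproblem, i.e. the step the paper's corrections target. All names and the rank-4 test case are illustrative, not the proposed algorithm.

```python
import numpy as np

def khatri_rao(A, B):
    # column-wise Kronecker product, shape (I*J, R)
    return (A[:, None, :] * B[None, :, :]).reshape(-1, A.shape[1])

def cp_als(T, R, iters=100):
    """Rank-R CP decomposition of an order-3 tensor via plain ALS."""
    rng = np.random.default_rng(0)
    U = [rng.standard_normal((n, R)) for n in T.shape]
    for _ in range(iters):
        for mode in range(3):
            A, B = [U[m] for m in range(3) if m != mode]
            # MTTKRP: mode unfolding of T times the Khatri-Rao product of
            # the other factors -- the cost-dominant contraction
            Tm = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
            M = Tm @ khatri_rao(A, B)
            G = (A.T @ A) * (B.T @ B)              # Gram matrix, elementwise
            U[mode] = np.linalg.solve(G, M.T).T    # solve U G = M (G symmetric)
    return U

# Usage: recover an exactly rank-4 random tensor (assumption).
rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((n, 4)) for n in (6, 7, 8))
T = np.einsum('ir,jr,kr->ijk', A, B, C)
U = cp_als(T, R=4)
T_hat = np.einsum('ir,jr,kr->ijk', *U)
print(np.linalg.norm(T_hat - T) / np.linalg.norm(T))
```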