This paper concerns the universal approximation property of neural networks in variable Lebesgue spaces. We show that, whenever the exponent function of the space is bounded, every function in the space can be approximated by shallow neural networks to any desired accuracy. This result in turn shows that universality of the approximation is determined by the boundedness of the exponent function. Furthermore, whenever the exponent is unbounded, we obtain characterization results for the subspace of functions that can be approximated.
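For orientation, a shallow (single-hidden-layer) network of width $N$ with activation $\sigma$ computes a function of the form (our notation, not taken from the abstract)
$$ f_N(x) = \sum_{k=1}^{N} a_k\, \sigma(w_k \cdot x + b_k), \qquad a_k, b_k \in \mathbb{R},\ w_k \in \mathbb{R}^d, $$
and universality in a variable Lebesgue space $L^{p(\cdot)}$ means that for every $f$ in the space and every $\varepsilon > 0$ there is such an $f_N$ with $\|f - f_N\|_{L^{p(\cdot)}} < \varepsilon$.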
We study the expressivity of deep neural networks. Measuring a network's complexity by its number of connections or by its number of neurons, we consider the class of functions for which the error of best approximation with networks of a given complexity …
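One common way to formalize this (a sketch of the standard approximation-space definition; the paper's exact conventions may differ) is via
$$ E_N(f) = \inf_{\Phi \in \mathcal{NN}_N} \|f - \Phi\|, \qquad A^r = \{\, f : E_N(f) \le C\, N^{-r} \ \text{for all } N \,\}, $$
where $\mathcal{NN}_N$ denotes the networks with at most $N$ connections (or neurons).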
In this paper, we introduce the Hausdorff operator associated with the Opdam--Cherednik transform and study the boundedness of this operator in various Lebesgue spaces. In particular, we prove the boundedness of the Hausdorff operator in Lebesgue spaces …
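For context, the classical one-dimensional Hausdorff operator generated by a kernel function $\Phi$ is usually written as
$$ (\mathcal{H}_\Phi f)(x) = \int_0^{\infty} \frac{\Phi(t)}{t}\, f\!\Big(\frac{x}{t}\Big)\, dt; $$
the operator introduced in the paper is an analogue adapted to the Opdam--Cherednik transform, whose precise form may differ from this classical model.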
We generalize the classical universal approximation theorem for neural networks to the case of complex-valued neural networks. Precisely, we consider feedforward networks with a complex activation function $\sigma : \mathbb{C} \to \mathbb{C}$ in which each …
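Concretely, in the shallow case such a complex-valued network is a map of the generic form (a template for illustration; the paper's architecture may be deeper or parameterized differently)
$$ z \in \mathbb{C}^d \ \mapsto\ \sum_{k=1}^{N} a_k\, \sigma\big(w_k^{\mathsf{T}} z + b_k\big), \qquad a_k, b_k \in \mathbb{C},\ w_k \in \mathbb{C}^d, $$
with the complex activation $\sigma$ applied after each affine step.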
We approximate functions defined on smooth bounded domains by elements of the eigenspaces of the Laplacian or the Stokes operator in such a way that the approximations are bounded and converge in both Sobolev and Lebesgue spaces. We prove an abstract …
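As a rough template (our notation, assuming the usual spectral setting), if $(e_k)$ are eigenfunctions with eigenvalues $\lambda_k$, the natural first candidate is the truncated eigenfunction expansion
$$ u_N = \sum_{\lambda_k \le N} \langle u, e_k \rangle\, e_k, $$
although the uniform boundedness required in the abstract generally calls for a more careful construction than this bare projection.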
We study the expressive power of deep ReLU neural networks for approximating functions in dilated shift-invariant spaces, which are widely used in signal processing, image processing, communications, and so on. Approximation error bounds are estimated …
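For reference, a shift-invariant space generated by a function $\phi$ consists of functions of the form $f(x) = \sum_{k \in \mathbb{Z}^d} c_k\, \phi(x - k)$, and its dilated version rescales the argument, e.g. $f(x) = \sum_{k} c_k\, \phi(\lambda x - k)$ for a dilation parameter $\lambda > 0$ (a standard definition; the paper's precise setup may differ).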