
Approximation with Neural Networks in Variable Lebesgue Spaces

Added by Jesús Ocáriz
Publication date: 2020
Language: English





This paper concerns the universal approximation property of neural networks in variable Lebesgue spaces. We show that, whenever the exponent function of the space is bounded, every function in the space can be approximated by shallow neural networks to any desired accuracy. It follows that universality of the approximation is determined by the boundedness of the exponent function. Furthermore, whenever the exponent is unbounded, we obtain characterization results for the subspace of functions that can be approximated.
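For orientation, the objects in play can be written down explicitly; the notation below is the standard one for variable Lebesgue spaces and shallow networks, and is assumed here rather than quoted from the paper:

$$\|f\|_{L^{p(\cdot)}(\Omega)} \;=\; \inf\Big\{\lambda > 0 \;:\; \int_\Omega \Big|\tfrac{f(x)}{\lambda}\Big|^{p(x)}\,dx \le 1\Big\}, \qquad f_N(x) \;=\; \sum_{k=1}^{N} a_k\,\sigma(w_k \cdot x + b_k),$$

where $p(\cdot) : \Omega \to [1,\infty)$ is the exponent function (the bounded case means $\operatorname{ess\,sup}_x p(x) < \infty$) and $f_N$ is a shallow (one hidden layer) network with activation $\sigma$.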

Related research

We study the expressivity of deep neural networks. Measuring a network's complexity by its number of connections or by its number of neurons, we consider the class of functions for which the error of best approximation with networks of a given complexity decays at a certain rate when the complexity budget is increased. Using results from classical approximation theory, we show that this class can be endowed with a (quasi-)norm that makes it a linear function space, called an approximation space. We establish that allowing the networks to have certain types of skip connections does not change the resulting approximation spaces. We also discuss the role of the network's nonlinearity (also known as the activation function) on the resulting spaces, as well as the role of depth. For the popular ReLU nonlinearity and its powers, we relate the newly constructed spaces to classical Besov spaces. The established embeddings highlight that some functions of very low Besov smoothness can nevertheless be well approximated by neural networks, if these networks are sufficiently deep.
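A sketch of the classical construction behind such approximation spaces, in standard approximation-theoretic notation (the paper's precise normalization may differ): with $\Sigma_n$ the set of networks of complexity at most $n$ and $E_n(f) := \inf_{g \in \Sigma_n} \|f - g\|_X$ the error of best approximation, the approximation space collects the functions whose errors decay like $n^{-\alpha}$, with quasi-norm

$$\|f\|_{A^{\alpha}_{q}(X)} \;:=\; \|f\|_X \;+\; \Big( \sum_{n=1}^{\infty} \big[\, n^{\alpha}\, E_n(f) \,\big]^{q}\, \frac{1}{n} \Big)^{1/q} \qquad (\alpha > 0,\ 0 < q \le \infty),$$

with the usual supremum modification when $q = \infty$.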
In this paper, we introduce the Hausdorff operator associated with the Opdam--Cherednik transform and study the boundedness of this operator in various Lebesgue spaces. In particular, we prove the boundedness of the Hausdorff operator in Lebesgue spaces, in grand Lebesgue spaces, and in quasi-Banach spaces that are associated with the Opdam--Cherednik transform. Also, we give necessary and sufficient conditions for the boundedness of the Hausdorff operator in these spaces.
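For context, the classical one-dimensional Hausdorff operator, of which the operator studied here is the Opdam--Cherednik analogue, is usually written for a suitable kernel $\Phi$ as

$$(H_\Phi f)(x) \;=\; \int_0^\infty \frac{\Phi(t)}{t}\, f\Big(\frac{x}{t}\Big)\, dt;$$

the paper's operator replaces the underlying translation and dilation structure with the one induced by the Opdam--Cherednik transform, so this display is only a guide to the shape of the object.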
Felix Voigtlaender 2020
We generalize the classical universal approximation theorem for neural networks to the case of complex-valued neural networks. Precisely, we consider feedforward networks with a complex activation function $\sigma : \mathbb{C} \to \mathbb{C}$ in which each neuron performs the operation $\mathbb{C}^N \to \mathbb{C}$, $z \mapsto \sigma(b + w^T z)$, with weights $w \in \mathbb{C}^N$ and a bias $b \in \mathbb{C}$, and with $\sigma$ applied componentwise. We completely characterize those activation functions $\sigma$ for which the associated complex networks have the universal approximation property, meaning that they can uniformly approximate any continuous function on any compact subset of $\mathbb{C}^d$ arbitrarily well. Unlike the classical case of real networks, the set of good activation functions which give rise to networks with the universal approximation property differs significantly depending on whether one considers deep networks or shallow networks: for deep networks with at least two hidden layers, the universal approximation property holds as long as $\sigma$ is not a polynomial, a holomorphic function, or an antiholomorphic function. Shallow networks, on the other hand, are universal if and only if the real part or the imaginary part of $\sigma$ is not a polyharmonic function.
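Concretely, in notation assumed here rather than quoted from the paper, a shallow complex-valued network with $N$ neurons realizes maps

$$\mathbb{C}^d \to \mathbb{C}, \qquad z \;\mapsto\; \sum_{k=1}^{N} a_k\, \sigma\big(b_k + w_k^{T} z\big), \qquad a_k, b_k \in \mathbb{C},\ w_k \in \mathbb{C}^d,$$

and "polyharmonic" means annihilated by some power of the Laplacian, i.e. $\Delta^m u = 0$ for some $m \in \mathbb{N}$, with $\sigma$ viewed as a function of two real variables.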
We approximate functions defined on smooth bounded domains by elements of the eigenspaces of the Laplacian or the Stokes operator, in such a way that the approximations are bounded and converge in both Sobolev and Lebesgue spaces. We prove an abstract result, phrased in terms of fractional power spaces of positive, self-adjoint, compact-inverse operators on Hilbert spaces, and then obtain our main result by using the explicit form of these fractional power spaces for the Dirichlet Laplacian and the Stokes operator. As a simple application, we prove that all weak solutions of the incompressible convective Brinkman--Forchheimer equations posed on a bounded domain in $\mathbb{R}^3$ satisfy the energy equality.
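A minimal sketch of the fractional power spaces involved, in standard spectral-theoretic notation assumed here: if $A$ is positive and self-adjoint with compact inverse on a Hilbert space $H$, it admits an orthonormal basis of eigenfunctions $A w_k = \lambda_k w_k$ with $\lambda_k \nearrow \infty$, and for $s \ge 0$

$$D(A^{s}) \;=\; \Big\{ u = \sum_k c_k w_k \;:\; \|u\|_{D(A^{s})}^2 := \sum_k \lambda_k^{2s} |c_k|^2 < \infty \Big\},$$

so approximation by elements of the eigenspaces amounts to working with finite sums $\sum_{\lambda_k \le \Lambda} c_k w_k$.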
Yunfei Yang, Zhen Li, Yang Wang 2020
We study the expressive power of deep ReLU neural networks for approximating functions in dilated shift-invariant spaces, which are widely used in signal processing, image processing, communications and so on. Approximation error bounds are estimated with respect to the width and depth of the neural networks. The network construction is based on the bit extraction and data-fitting capacity of deep neural networks. As applications of our main results, approximation rates for classical function spaces such as Sobolev spaces and Besov spaces are obtained. We also give lower bounds for the $L^p$ ($1 \le p \le \infty$) approximation error for Sobolev spaces, which show that our construction of neural networks is asymptotically optimal up to a logarithmic factor.
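One common way to write a dilated shift-invariant space, offered only as orientation since the paper's exact definition may differ: given a generator $\varphi$ and a dilation level $j$,

$$V_j(\varphi) \;=\; \Big\{ \sum_{k \in \mathbb{Z}^d} c(k)\, \varphi(2^{j} x - k) \;:\; c \in \ell^{p}(\mathbb{Z}^d) \Big\},$$

which is one route by which approximation rates for such spaces translate into rates for classical Sobolev and Besov classes via spline or wavelet generators.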
