While it is widely known that neural networks are universal approximators of continuous functions, a less known and perhaps more powerful result is that a neural network with a single hidden layer can accurately approximate any nonlinear continuous operator. This universal approximation theorem is suggestive of the potential application of neural networks in learning nonlinear operators from data. However, the theorem guarantees only a small approximation error for a sufficiently large network, and does not consider the important optimization and generalization errors. To realize this theorem in practice, we propose deep operator networks (DeepONets) to learn operators accurately and efficiently from a relatively small dataset. A DeepONet consists of two sub-networks, one for encoding the input function at a fixed number of sensors $x_i, i=1,\dots,m$ (branch net), and another for encoding the locations of the output functions (trunk net). We perform systematic simulations for identifying two types of operators, i.e., dynamic systems and partial differential equations, and demonstrate that DeepONet significantly reduces the generalization error compared to fully connected networks. We also derive theoretically the dependence of the approximation error on the number of sensors (where the input function is defined) as well as on the input function type, and we verify the theorem with computational results. More importantly, we observe high-order error convergence in our computational tests, namely polynomial rates (from half order to fourth order) and even exponential convergence with respect to the training dataset size.
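The abstract does not spell out how the branch and trunk outputs are merged; in the underlying operator approximation theorem they combine as an inner product, $G(u)(y)\approx\sum_{k=1}^{p} b_k(u)\,t_k(y)$. Below is a minimal, hypothetical PyTorch sketch of that form; the sensor count $m$, basis size $p$, layer widths, and activations are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    """Minimal sketch: G(u)(y) ~ sum_k b_k(u) * t_k(y) + bias.

    All sizes below (m sensors, p basis terms, hidden width) are
    illustrative choices, not prescribed by the abstract.
    """
    def __init__(self, m_sensors=100, p=40, width=64):
        super().__init__()
        # Branch net: encodes the input function sampled at m fixed sensors.
        self.branch = nn.Sequential(
            nn.Linear(m_sensors, width), nn.Tanh(),
            nn.Linear(width, p),
        )
        # Trunk net: encodes the location y where the output is evaluated.
        self.trunk = nn.Sequential(
            nn.Linear(1, width), nn.Tanh(),
            nn.Linear(width, p), nn.Tanh(),
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, u_sensors, y):
        # u_sensors: (batch, m) samples u(x_1), ..., u(x_m); y: (batch, 1).
        b = self.branch(u_sensors)   # (batch, p) coefficients b_k(u)
        t = self.trunk(y)            # (batch, p) basis values t_k(y)
        return (b * t).sum(dim=-1, keepdim=True) + self.bias

# Usage: predict G(u)(y) for a batch of (input function, query point) pairs.
model = DeepONet()
u = torch.randn(8, 100)   # 8 input functions sampled at 100 sensors
y = torch.rand(8, 1)      # one query location per function
out = model(u, y)         # (8, 1) approximations of G(u)(y)
```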
For a constant-coefficient partial differential operator $P(D)$ with a single characteristic direction, such as the time-dependent free Schrödinger operator, as well as non-degenerate parabolic differential operators like the heat operator, we character
State-of-the-art deep learning models have made steady progress in the fields of computer vision and natural language processing, at the expense of growing model sizes and computational complexity. Deploying these models on low-power and mobile devic
Modelling functions of sets, or equivalently, permutation-invariant functions, is a long-standing challenge in machine learning. Deep Sets is a popular method which is known to be a universal approximator for continuous set functions. We provide a th
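As a concrete illustration of the sum-decomposition behind Deep Sets, $f(X)=\rho\big(\sum_{x\in X}\phi(x)\big)$, here is a minimal, hypothetical PyTorch sketch; the architecture sizes are assumptions for illustration only, and the permutation check at the end demonstrates the invariance property the abstract refers to.

```python
import torch
import torch.nn as nn

class DeepSets(nn.Module):
    """Minimal sketch of f(X) = rho(sum_{x in X} phi(x)).

    Sum pooling makes the output invariant to the ordering of set
    elements. Widths and depths are illustrative, not from the paper.
    """
    def __init__(self, in_dim=3, hidden=64, out_dim=1):
        super().__init__()
        self.phi = nn.Sequential(  # per-element encoder
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden),
        )
        self.rho = nn.Sequential(  # decoder applied to the pooled summary
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        # x: (batch, set_size, in_dim); summing over dim=1 discards order.
        return self.rho(self.phi(x).sum(dim=1))

# Permuting the set elements leaves the output unchanged (up to float error).
model = DeepSets()
x = torch.randn(2, 5, 3)
perm = x[:, torch.randperm(5), :]
assert torch.allclose(model(x), model(perm), atol=1e-5)
```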
The quaternionic spectral theorem has already been considered in the literature, see, e.g., [22], [31], [32]; however, except for the finite-dimensional case, in which the notion of spectrum is associated with an eigenvalue problem, see [21], it is not sp
In this paper we develop the calculus of pseudo-differential operators corresponding to the quantizations of the form $$ Au(x)=\int_{\mathbb{R}^n}\int_{\mathbb{R}^n}e^{i(x-y)\cdot\xi}\,\sigma(x+\tau(y-x),\xi)\,u(y)\,dy\,d\xi, $$ where $\tau:\mathbb{R}^n\to\mathbb{R}^n$ is
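For orientation, the familiar quantizations are recovered by standard choices of $\tau$; these special cases are well known in the pseudo-differential literature and are not claims made in the excerpt above.

```latex
% \tau \equiv 0 evaluates the symbol at x (Kohn--Nirenberg quantization):
\[
Au(x)=\int_{\mathbb{R}^n}\int_{\mathbb{R}^n}
  e^{i(x-y)\cdot\xi}\,\sigma(x,\xi)\,u(y)\,dy\,d\xi .
\]
% \tau(w) = w/2 evaluates the symbol at the midpoint (Weyl quantization):
\[
Au(x)=\int_{\mathbb{R}^n}\int_{\mathbb{R}^n}
  e^{i(x-y)\cdot\xi}\,\sigma\!\left(\tfrac{x+y}{2},\xi\right)u(y)\,dy\,d\xi .
\]
```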