Recently, neural networks have been widely applied to solving partial differential equations. However, the resulting optimization problem poses many challenges for current training algorithms: the convergence order that has been proven theoretically is not attained numerically. In this paper, we develop a novel greedy training algorithm for solving PDEs which builds the neural network architecture adaptively. It is the first training algorithm that numerically realizes the proven convergence order of neural networks. This algorithm is tested on several benchmark examples in both 1D and 2D to confirm its efficiency and robustness.
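To make the adaptive construction concrete, here is a minimal sketch of one plausible greedy training loop: a shallow ReLU network is grown one neuron at a time, each new neuron chosen from random candidates by its correlation with the current residual, after which the output weights are refit by least squares. This illustrates the general idea of architecture-adaptive greedy training for function approximation, not the specific PDE algorithm of the abstract; all names and hyperparameters are illustrative assumptions.

```python
import numpy as np

# Greedily grow a shallow ReLU network f_m(x) = sum_k c_k * relu(w_k*x + b_k)
# by adding one neuron at a time to fit the current residual, then refitting
# the outer weights by least squares.

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)
target = np.sin(np.pi * x)           # stand-in for a PDE solution

relu = lambda z: np.maximum(z, 0.0)
features = []                        # (w, b) pairs of the chosen neurons

for _ in range(10):                  # grow 10 neurons
    # Current model output and residual.
    if features:
        Phi = np.stack([relu(w * x + b) for w, b in features], axis=1)
        coef, *_ = np.linalg.lstsq(Phi, target, rcond=None)
        residual = target - Phi @ coef
    else:
        residual = target.copy()
    # Greedy step: among random candidate neurons, keep the one whose
    # normalized activation correlates best with the residual.
    cands = rng.uniform(-3.0, 3.0, size=(500, 2))
    scores = [abs(np.dot(relu(w * x + b), residual))
              / (np.linalg.norm(relu(w * x + b)) + 1e-12)
              for w, b in cands]
    features.append(tuple(cands[int(np.argmax(scores))]))

# Final least-squares fit of the output weights and the resulting error.
Phi = np.stack([relu(w * x + b) for w, b in features], axis=1)
coef, *_ = np.linalg.lstsq(Phi, target, rcond=None)
print("final RMS error:", np.linalg.norm(target - Phi @ coef) / np.sqrt(len(x)))
```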
Recent works have shown that deep neural networks can be employed to solve partial differential equations, giving rise to the framework of physics-informed neural networks. We introduce a generalization of these methods that manifests as a scaling p…
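Since this abstract is truncated, the following sketch shows only the baseline physics-informed neural network training loop that such generalizations build on: a network u_theta is trained to drive the PDE residual and the boundary loss to zero for -u'' = pi^2 sin(pi x) on (0, 1) with u(0) = u(1) = 0 (exact solution sin(pi x)). The architecture, loss weighting, and optimizer settings are assumptions for illustration.

```python
import torch

# Minimal PINN: penalize the PDE residual at random collocation points
# plus the squared boundary values.
torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(128, 1, requires_grad=True)      # collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    f = torch.pi**2 * torch.sin(torch.pi * x)       # source term
    pde_loss = ((-d2u - f) ** 2).mean()
    xb = torch.tensor([[0.0], [1.0]])               # boundary points
    bc_loss = (net(xb) ** 2).mean()
    loss = pde_loss + bc_loss                       # equal weighting is a choice
    opt.zero_grad(); loss.backward(); opt.step()

print("final loss:", loss.item())
```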
Reduced bases have been introduced for the approximation of parametrized PDEs in applications where many online queries are required. Their numerical efficiency for such problems has been theoretically confirmed in \cite{BCDDPW,DPW}, where it is shown …
In this paper, we propose forward-backward stochastic differential equation (FBSDE) based deep neural network (DNN) learning algorithms for the solution of high-dimensional quasilinear parabolic partial differential equations (PDEs), which are …
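For context, the following is a minimal deep-BSDE-style sketch of the FBSDE-to-DNN connection, assuming the simple linear model problem u_t + (1/2)Δu = 0 on [0, T] with terminal condition g(x) = |x|^2, so that Y_t = u(t, X_t) satisfies dY = Z·dW along the forward diffusion dX = dW. It is not the quasilinear algorithm of the paper; all names and hyperparameters are illustrative.

```python
import torch

# Deep-BSDE-style sketch: learn the initial value Y0 = u(0, x0) and the
# hedging maps x -> Z_n so that the simulated Y_T matches g(X_T).
torch.manual_seed(0)
d, N, T, batch = 10, 20, 1.0, 256
dt = T / N
g = lambda x: (x ** 2).sum(dim=1, keepdim=True)   # terminal condition

y0 = torch.nn.Parameter(torch.zeros(1))
z_nets = torch.nn.ModuleList(
    torch.nn.Sequential(torch.nn.Linear(d, 32), torch.nn.ReLU(),
                        torch.nn.Linear(32, d))
    for _ in range(N))
opt = torch.optim.Adam([y0, *z_nets.parameters()], lr=5e-2)

for step in range(1000):
    x = torch.zeros(batch, d)                     # X_0 = 0
    y = y0.expand(batch, 1)
    for n in range(N):                            # Euler-Maruyama in time
        dw = torch.randn(batch, d) * dt ** 0.5
        y = y + (z_nets[n](x) * dw).sum(dim=1, keepdim=True)
        x = x + dw
    loss = ((y - g(x)) ** 2).mean()               # match terminal condition
    opt.zero_grad(); loss.backward(); opt.step()

# For this model problem u(0, 0) = d*T = 10; y0 should approach it.
print("u(0, 0) estimate:", y0.item())
```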
Rational exponential integrators (REXI) are a class of numerical methods that are well suited for the time integration of linear partial differential equations with imaginary eigenvalues. Since these methods can be parallelized in time (in addition t…
We show that a very simple modification of the Pure Greedy Algorithm for approximating functions by sparse sums from a dictionary in a Hilbert space, or more generally a Banach space, achieves optimal convergence rates on the class of convex combinations of dictionary elements.
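For reference, the following sketch implements the unmodified Pure Greedy Algorithm in a finite-dimensional Hilbert space: at each step, the dictionary element best correlated with the residual is selected and the corresponding projection is added to the approximant. The particular modification the abstract refers to is not reproduced here; the dictionary and target are synthetic.

```python
import numpy as np

# Pure Greedy Algorithm: f_m = f_{m-1} + <r_{m-1}, g_m> g_m, where g_m
# maximizes |<r_{m-1}, g>| over a unit-norm dictionary.
rng = np.random.default_rng(1)
n, dict_size, steps = 50, 400, 100

# Random unit-norm dictionary and a target that is a convex combination
# of a few dictionary elements.
D = rng.standard_normal((dict_size, n))
D /= np.linalg.norm(D, axis=1, keepdims=True)
f = D[:5].mean(axis=0)

approx = np.zeros(n)
for m in range(steps):
    r = f - approx                   # current residual
    corr = D @ r                     # <r, g> for every dictionary element
    k = int(np.argmax(np.abs(corr)))
    approx += corr[k] * D[k]         # greedy update

print("residual norm after", steps, "steps:", np.linalg.norm(f - approx))
```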