
An efficient greedy training algorithm for neural networks and applications in PDEs

Published by: Jonathan Siegel
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Recently, neural networks have been widely applied to solving partial differential equations. However, the resulting optimization problem poses many challenges for current training algorithms, which manifests itself in the fact that theoretically proven convergence orders are not attained numerically. In this paper, we develop a novel greedy training algorithm for solving PDEs which builds the neural network architecture adaptively. It is the first training algorithm for which the convergence order of neural networks is observed numerically. The algorithm is tested on several benchmark examples in both 1D and 2D to confirm its efficiency and robustness.
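To make the greedy idea concrete, below is a minimal sketch of an orthogonal-greedy loop that grows an approximation one neuron at a time. It is written for plain function approximation in a discrete L2 norm rather than a PDE energy norm; the target function, the ReLU dictionary, and all parameter ranges are illustrative assumptions, not the paper's actual setup.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 200)          # collocation grid
    target = np.sin(2 * np.pi * x)          # function to approximate (assumed)

    # Candidate neurons g(x) = relu(w*x + b) with randomly sampled parameters.
    W = rng.uniform(-10, 10, size=500)
    B = rng.uniform(-10, 10, size=500)
    dictionary = np.maximum(W[:, None] * x[None, :] + B[:, None], 0.0)
    norms = np.linalg.norm(dictionary, axis=1, keepdims=True)
    dictionary = dictionary[norms[:, 0] > 1e-12]          # drop identically-zero neurons
    dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

    chosen, residual = [], target.copy()
    for step in range(20):
        # Greedy step: pick the neuron most correlated with the current residual.
        k = int(np.argmax(np.abs(dictionary @ residual)))
        chosen.append(k)
        # Orthogonal step: refit all coefficients jointly by least squares.
        A = dictionary[chosen].T
        coef, *_ = np.linalg.lstsq(A, target, rcond=None)
        residual = target - A @ coef
        print(f"{step + 1:2d} neurons, L2 error {np.linalg.norm(residual):.3e}")

Each pass adds exactly one neuron, so the architecture grows adaptively with the approximation, which is the structural point of the greedy approach described above.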




Read also

Recent works have shown that deep neural networks can be employed to solve partial differential equations, giving rise to the framework of physics-informed neural networks. We introduce a generalization for these methods that manifests as a scaling parameter which balances the relative importance of the different constraints imposed by partial differential equations. A mathematical motivation of these generalized methods is provided, which shows that for linear and well-posed partial differential equations, the functional form is convex. We then derive a choice for the scaling parameter that is optimal with respect to a measure of relative error. Because this optimal choice relies on having full knowledge of analytical solutions, we also propose a heuristic method to approximate this optimal choice. The proposed methods are compared numerically to the original methods on a variety of model partial differential equations, with the number of data points being updated adaptively. For several problems, including high-dimensional PDEs, the proposed methods are shown to significantly enhance accuracy.
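As a rough illustration of the scaling idea, the sketch below weights a PDE-residual block against a boundary-condition block with a single parameter lam. To stay dependency-free, the "network" is replaced by a polynomial model so the weighted loss becomes a least-squares problem; lam, the grid, and the test problem $-u'' = f$ are all assumptions for illustration, not the paper's method.

    import numpy as np

    xs = np.linspace(0.0, 1.0, 64)                 # collocation points
    f = (np.pi ** 2) * np.sin(np.pi * xs)          # exact solution is sin(pi x)
    deg = 12                                       # model: u(x) = sum_j c_j x^j
    j = np.arange(deg + 1)

    # Rows of A_pde evaluate -u'' at the collocation points;
    # rows of A_bc evaluate u at the boundary points 0 and 1.
    A_pde = -np.vstack([j * (j - 1) * xi ** np.maximum(j - 2, 0) for xi in xs])
    A_bc = np.vstack([0.0 ** j.astype(float), np.ones(deg + 1)])

    for lam in [0.01, 1.0, 100.0]:
        # Stack the two constraint blocks with weight sqrt(lam) on the PDE part,
        # so the least-squares objective is lam*||PDE residual||^2 + ||BC residual||^2.
        A = np.vstack([np.sqrt(lam) * A_pde, A_bc])
        b = np.concatenate([np.sqrt(lam) * f, np.zeros(2)])
        c, *_ = np.linalg.lstsq(A, b, rcond=None)
        err = np.max(np.abs(np.polyval(c[::-1], xs) - np.sin(np.pi * xs)))
        print(f"lam={lam:7.2f}  max error {err:.2e}")

Varying lam shifts the fit between satisfying the equation in the interior and matching the boundary data, which is exactly the trade-off the scaling parameter is meant to balance.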
Albert Cohen, Wolfgang Dahmen, 2018
Reduced bases have been introduced for the approximation of parametrized PDEs in applications where many online queries are required. Their numerical efficiency for such problems has been theoretically confirmed in \cite{BCDDPW,DPW}, where it is shown that the reduced basis space $V_n$ of dimension $n$, constructed by a certain greedy strategy, has approximation error similar to that of the optimal space associated to the Kolmogorov $n$-width of the solution manifold. The greedy construction of the reduced basis space is performed in an offline stage which requires at each step a maximization of the current error over the parameter space. For the purpose of numerical computation, this maximization is performed over a finite \emph{training set} obtained through a discretization of the parameter domain. To guarantee a final approximation error $\varepsilon$ for the space generated by the greedy algorithm requires in principle that the snapshots associated to this training set constitute an approximation net for the solution manifold with accuracy of order $\varepsilon$. Hence, the size of the training set is the $\varepsilon$-covering number for $\mathcal{M}$, and this covering number typically behaves like $\exp(C\varepsilon^{-1/s})$ for some $C>0$ when the solution manifold has $n$-width decay $O(n^{-s})$. Thus, the sheer size of the training set prohibits implementation of the algorithm when $\varepsilon$ is small. The main result of this paper shows that, if one is willing to accept results which hold with high probability, rather than with certainty, then for a large class of relevant problems one may replace the fine discretization by a random training set of size polynomial in $\varepsilon^{-1}$. Our proof of this fact is established by using inverse inequalities for polynomials in high dimensions.
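The following is a minimal sketch of the (weak) greedy reduced-basis loop driven by a random training set, the strategy analyzed above. The explicit solution map u(x; mu) stands in for a parametrized PDE solver; its form, the grid, and the training-set size are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 1.0, 300)
    u = lambda mu: 1.0 / (1.0 + mu * x)           # toy "snapshot" for parameter mu

    mus = rng.uniform(0.0, 10.0, size=200)        # random training set of parameters
    snaps = np.stack([u(m) for m in mus])         # all training snapshots

    V = np.empty((0, x.size))                     # orthonormal reduced basis
    for n in range(1, 8):
        # Projection error of every training snapshot onto the current basis.
        proj = snaps @ V.T @ V if V.size else np.zeros_like(snaps)
        errs = np.linalg.norm(snaps - proj, axis=1)
        k = int(np.argmax(errs))                  # greedy: worst-approximated parameter
        print(f"n={n}: picked mu={mus[k]:6.3f}, max error {errs[k]:.2e}")
        # Gram-Schmidt the new snapshot against the basis and append it.
        v = snaps[k] - (V.T @ (V @ snaps[k]) if V.size else 0.0)
        V = np.vstack([V, v / np.linalg.norm(v)])

The paper's point is about the size of mus: a random training set of size polynomial in $\varepsilon^{-1}$ suffices with high probability, where a deterministic discretization would need exponentially many points.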
Wenzhong Zhang, Wei Cai, 2020
In this paper, we propose forward and backward stochastic differential equations (FBSDEs) based deep neural network (DNN) learning algorithms for the solution of high dimensional quasilinear parabolic partial differential equations (PDEs), which are related to the FBSDEs by the Pardoux-Peng theory. The algorithms rely on a learning process by minimizing the pathwise difference between two discrete stochastic processes, defined by the time discretization of the FBSDEs and the DNN representation of the PDE solutions, respectively. The proposed algorithms are shown to generate DNN solutions for a 100-dimensional Black--Scholes--Barenblatt equation that are accurate in a finite region of the solution space, with a convergence rate similar to that of the Euler--Maruyama discretization used for the FBSDEs. As a result, a Richardson extrapolation technique over time discretizations can be used to enhance the accuracy of the DNN solutions. For time oscillatory solutions, a multiscale DNN is shown to improve the performance of the FBSDE DNN for high frequencies.
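In standard Euler--Maruyama form, the two discrete processes being matched can be sketched as follows, where $u_\theta$ denotes the DNN surrogate for the PDE solution (notation assumed for illustration; the paper's loss may differ in detail):

$$ X_{n+1} = X_n + b(t_n, X_n)\,\Delta t + \sigma(t_n, X_n)\,\Delta W_n, $$
$$ Y_{n+1} = Y_n - f(t_n, X_n, Y_n, Z_n)\,\Delta t + Z_n^{\top} \Delta W_n, \qquad Z_n = \sigma(t_n, X_n)^{\top} \nabla_x u_\theta(t_n, X_n), $$

and the training loss penalizes the pathwise mismatch between $Y_n$ and $u_\theta(t_n, X_n)$ along simulated trajectories, together with the terminal condition $Y_N \approx g(X_N)$. Since the discretization error of this scheme is inherited from Euler--Maruyama, extrapolating over several step sizes $\Delta t$ is what enables the Richardson technique mentioned above.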
Rational exponential integrators (REXI) are a class of numerical methods that are well suited for the time integration of linear partial differential equations with imaginary eigenvalues. Since these methods can be parallelized in time (in addition to the spatial parallelization that is commonly performed) they are well suited to exploit modern high performance computing systems. In this paper, we propose a novel REXI scheme that drastically improves accuracy and efficiency. The chosen approach will also allow us to easily determine how many terms are required in the approximation in order to obtain accurate results. We provide comparative numerical simulations for a shallow water equation that highlight the efficiency of our approach and demonstrate that REXI schemes can be efficiently implemented on graphics processing units.
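The structure that makes REXI parallel in time can be sketched in one line: the matrix exponential is replaced by a sum of independent rational terms, with coefficients $\alpha_n, \beta_n$ coming from the particular rational approximation (assumed given here),

$$ e^{\Delta t\, L} u_0 \;\approx\; \sum_{n=1}^{N} \beta_n \left(\Delta t\, L + \alpha_n I\right)^{-1} u_0, $$

so each of the $N$ shifted linear solves can be carried out concurrently, on top of any spatial parallelism inside each solve. The number of terms $N$ needed for a given accuracy is exactly what the proposed scheme is designed to make easy to determine.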
Guergana Petrova, 2015
We show that a very simple modification of the Pure Greedy Algorithm for approximating functions by sparse sums from a dictionary in a Hilbert, or more generally a Banach, space has optimal convergence rates on the class of convex combinations of dictionary elements.
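For reference, a minimal sketch of the classical Pure Greedy Algorithm in $\mathbb{R}^d$ with a finite dictionary is given below; the paper's specific modification is not reproduced, and the dictionary and target are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(2)
    d, m = 50, 400
    D = rng.normal(size=(m, d))
    D /= np.linalg.norm(D, axis=1, keepdims=True)   # unit-norm dictionary

    # Target chosen in the convex hull of the dictionary, the class discussed above.
    weights = rng.dirichlet(np.ones(10))
    f = weights @ D[:10]

    r = f.copy()
    for step in range(1, 31):
        c = D @ r                                   # correlations <r, g> with every atom
        k = int(np.argmax(np.abs(c)))
        r = r - c[k] * D[k]                         # PGA update: peel off the best match
        if step % 10 == 0:
            print(f"step {step:3d}: residual norm {np.linalg.norm(r):.3e}")

Unlike the orthogonal variant, PGA never refits earlier coefficients, which is why its unmodified convergence rate on this class is suboptimal and why a simple modification can help.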