
An exploratory study on machine learning to couple numerical solutions of partial differential equations

Posted by: Hansong Tang
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





As further progress in the accurate and efficient computation of coupled partial differential equations (PDEs) becomes increasingly difficult, it has become highly desirable to develop new methods for such computation. Departing from conventional approaches, this short communication explores a computational paradigm that couples numerical solutions of PDEs via machine-learning (ML) based methods, together with a preliminary study of the paradigm. In particular, it solves PDEs in subdomains as in a conventional approach, but develops and trains artificial neural networks (ANNs) to couple the PDE solutions at the subdomain interfaces, leading to solutions of the PDEs over the whole domain. The concepts and algorithms for the ML coupling are discussed using coupled Poisson equations and coupled advection-diffusion equations. Preliminary numerical examples illustrate the feasibility and performance of the ML coupling. Although preliminary, the results of this exploratory study indicate that the ML paradigm is promising and deserves further research.
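
To make the coupling idea concrete, the sketch below is a minimal, self-contained illustration, not the authors' algorithm or code: a 1D Poisson problem -u'' = f is solved by finite differences on two subdomains, and a small neural network (here scikit-learn's MLPRegressor as a stand-in ANN, trained on a family of right-hand sides) supplies the interface value that couples the two subdomain solves. All function and variable names are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def solve_poisson(xa, xb, ua, ub, amp, n=49):
    """Finite-difference solve of -u'' = amp*pi^2*sin(pi*x) on [xa, xb]
    with Dirichlet values ua, ub; returns the grid and the solution."""
    x = np.linspace(xa, xb, n + 2)
    h = x[1] - x[0]
    f = amp * np.pi**2 * np.sin(np.pi * x[1:-1])
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    b = f.copy()
    b[0] += ua / h**2
    b[-1] += ub / h**2
    return x, np.concatenate([[ua], np.linalg.solve(A, b), [ub]])

# Training data: near-interface values from tentative subdomain solves -> true
# interface value, over a family of right-hand sides f = amp*pi^2*sin(pi*x).
rng = np.random.default_rng(0)
X_train, y_train = [], []
for _ in range(400):
    amp = rng.uniform(-2.0, 2.0)       # exact global solution is amp*sin(pi*x)
    guess = rng.uniform(-2.0, 2.0)     # tentative interface value at x = 0.5
    _, uL = solve_poisson(0.0, 0.5, 0.0, guess, amp)
    _, uR = solve_poisson(0.5, 1.0, guess, 0.0, amp)
    X_train.append([uL[-2], guess, uR[1]])  # values just inside each subdomain
    y_train.append(amp)                     # exact interface value u(0.5) = amp
net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
net.fit(np.array(X_train), np.array(y_train))

# Couple the two subdomain solves with the trained network (Schwarz-like loop).
amp, g = 1.0, 0.0                      # target problem, poor initial interface guess
for _ in range(5):
    _, uL = solve_poisson(0.0, 0.5, 0.0, g, amp)
    _, uR = solve_poisson(0.5, 1.0, g, 0.0, amp)
    g = float(net.predict([[uL[-2], g, uR[1]]])[0])
print("predicted interface value:", g, " exact:", amp * np.sin(np.pi * 0.5))
```

In the paper the coupling is developed for coupled Poisson and advection-diffusion equations in higher dimensions; this toy problem only mirrors the structure of solving in subdomains and letting a trained network provide the interface condition.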




Read also

The numerical solution of differential equations can be formulated as an inference problem to which formal statistical approaches can be applied. However, nonlinear partial differential equations (PDEs) pose substantial challenges from an inferential perspective, most notably the absence of explicit conditioning formulae. This paper extends earlier work on linear PDEs to a general class of initial value problems specified by nonlinear PDEs, motivated by problems for which evaluations of the right-hand-side, initial conditions, or boundary conditions of the PDE have a high computational cost. The proposed method can be viewed as exact Bayesian inference under an approximate likelihood, which is based on discretisation of the nonlinear differential operator. Proof-of-concept experimental results demonstrate that meaningful probabilistic uncertainty quantification for the unknown solution of the PDE can be performed, while controlling the number of times the right-hand-side, initial and boundary conditions are evaluated. A suitable prior model for the solution of the PDE is identified using novel theoretical analysis of the sample path properties of Matérn processes, which may be of independent interest.
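
As a rough, hedged rendering of the inferential structure described above (my notation, not the paper's): with a Gaussian-process prior on the unknown solution $u$ and an approximate likelihood built from a discretisation $\mathcal{D}_h$ of the nonlinear differential operator $\mathcal{D}$, evaluated at a budget-limited set of points, the posterior takes the form

$$
u \sim \mathcal{GP}(m, k), \qquad
p(u \mid \text{data}) \;\propto\; p\big(\{f(x_i)\}_{i=1}^{n} \,\big|\, \{\mathcal{D}_h[u](x_i)\}_{i=1}^{n}\big)\, p(u),
$$

where $\mathcal{D}[u] = f$ is the PDE, $x_1,\dots,x_n$ are the points at which the right-hand side is evaluated, and the Matérn sample-path analysis mentioned above guides the choice of the covariance $k$. The precise construction is given in the paper.
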
This paper proposes Friedrichs learning as a novel deep learning methodology that learns the weak solutions of PDEs via a minmax formulation, which transforms the PDE problem into a minimax optimization problem whose solution identifies the weak solution. The name Friedrichs learning highlights the close relationship between our learning strategy and Friedrichs' theory on symmetric systems of PDEs. The weak solution and the test function in the weak formulation are parameterized as deep neural networks in a mesh-free manner and are alternately updated to approach the optimal solution network approximating the weak solution and the optimal test function, respectively. Extensive numerical results indicate that our mesh-free method can provide reasonably good solutions to a wide range of PDEs defined on regular and irregular domains in various dimensions, where classical numerical methods such as finite difference and finite element methods may be tedious or difficult to apply.
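
The sketch below is a generic alternating min-max weak-form training loop for a 1D Poisson problem, closer in spirit to a generic weak/adversarial formulation than to the exact Friedrichs loss, whose details are in the paper; the names u_net, v_net, and weak_residual are my own, and PyTorch is assumed.

```python
import torch

def mlp():
    return torch.nn.Sequential(
        torch.nn.Linear(1, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 1))

u_net, v_net = mlp(), mlp()            # weak-solution network and test-function network
opt_u = torch.optim.Adam(u_net.parameters(), lr=1e-3)
opt_v = torch.optim.Adam(v_net.parameters(), lr=1e-3)

f = lambda x: (torch.pi ** 2) * torch.sin(torch.pi * x)   # -u'' = f on (0,1), u(0)=u(1)=0
xb = torch.tensor([[0.0], [1.0]])                         # boundary points

def weak_residual(x):
    """Monte-Carlo estimate of a normalised weak-form residual
    ( integral of u'v' - f v )^2 / integral of v^2, on collocation points x."""
    x = x.requires_grad_(True)
    u = u_net(x)
    v = v_net(x) * x * (1 - x)                 # hard-enforce v = 0 on the boundary
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    dv = torch.autograd.grad(v.sum(), x, create_graph=True)[0]
    num = (du * dv - f(x) * v).mean() ** 2
    return num / (v ** 2).mean().clamp_min(1e-8)

for step in range(2000):
    x = torch.rand(256, 1)                     # mesh-free collocation points
    # maximisation step: the test network seeks the most revealing test function
    loss_v = -weak_residual(x)
    opt_v.zero_grad(); loss_v.backward(); opt_v.step()
    # minimisation step: the solution network reduces the residual + boundary penalty
    loss_u = weak_residual(x) + (u_net(xb) ** 2).mean()
    opt_u.zero_grad(); loss_u.backward(); opt_u.step()
```
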
Yiqi Gu, Haizhao Yang, Chao Zhou (2020)
The least squares method with deep neural networks as function parametrization has been applied successfully to solve certain high-dimensional partial differential equations (PDEs); however, its convergence is slow and might not be guaranteed even within a simple class of PDEs. To improve the convergence of the network-based least squares model, we introduce a novel self-paced learning framework, SelectNet, which quantifies the difficulty of training samples, treats samples equally in the early stage of training, and slowly explores more challenging samples, e.g., samples with larger residual errors, mimicking the human cognitive process for more efficient learning. In particular, a selection network and the PDE solution network are trained simultaneously; the selection network adaptively weights the training samples of the solution network, achieving the goal of self-paced learning. Numerical examples indicate that the proposed SelectNet model outperforms existing models in convergence speed and robustness, especially for low-regularity solutions.
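
The following is a simplified sketch of the self-paced weighting idea for a 1D Poisson problem, not the SelectNet code; the actual formulation (weight bounds, normalisation constraint, network architectures) is specified in the paper, and the names solution_net, select_net, and residual are illustrative.

```python
import torch

def mlp(out_act=None):
    layers = [torch.nn.Linear(1, 32), torch.nn.Tanh(),
              torch.nn.Linear(32, 32), torch.nn.Tanh(),
              torch.nn.Linear(32, 1)]
    if out_act is not None:
        layers.append(out_act)
    return torch.nn.Sequential(*layers)

solution_net = mlp()                            # approximates the PDE solution u
select_net = mlp(torch.nn.Sigmoid())            # assigns a weight to each sample
opt_u = torch.optim.Adam(solution_net.parameters(), lr=1e-3)
opt_s = torch.optim.Adam(select_net.parameters(), lr=1e-3)

f = lambda x: (torch.pi ** 2) * torch.sin(torch.pi * x)   # -u'' = f, u(0)=u(1)=0
xb = torch.tensor([[0.0], [1.0]])

def residual(x):
    """Pointwise strong-form residual -u''(x) - f(x) of the trial solution."""
    x = x.requires_grad_(True)
    u = solution_net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    return -d2u - f(x)

for step in range(2000):
    x = torch.rand(256, 1)
    r2 = residual(x) ** 2
    w = select_net(x)
    w = w / w.mean().clamp_min(1e-8)            # weights average to one, so emphasis is relative
    # selection network: emphasise harder samples by maximising the weighted residual
    loss_s = -(w * r2.detach()).mean()
    opt_s.zero_grad(); loss_s.backward(); opt_s.step()
    # solution network: minimise the weighted residual plus a boundary penalty
    loss_u = (w.detach() * r2).mean() + (solution_net(xb) ** 2).mean()
    opt_u.zero_grad(); loss_u.backward(); opt_u.step()
```
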
In recent years, sparse spectral methods for solving partial differential equations have been derived using hierarchies of classical orthogonal polynomials on intervals, disks, disk-slices and triangles. In this work we extend the methodology to a hierarchy of non-classical multivariate orthogonal polynomials on spherical caps. The entries of discretisations of partial differential operators can be effectively computed using formulae in terms of (non-classical) univariate orthogonal polynomials. We demonstrate the results on partial differential equations involving the spherical Laplacian and biharmonic operators, showing spectral convergence.
In this paper, we propose third-order semi-discretized schemes in space based on the tempered weighted and shifted Grünwald difference (tempered-WSGD) operators for the tempered fractional diffusion equation. We also present stability and convergence analysis for the fully discrete scheme based on a Crank–Nicolson scheme in time. A third-order scheme for the tempered Black–Scholes equation is also proposed and tested numerically. Some numerical experiments are carried out to confirm the accuracy and effectiveness of the proposed methods.
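
For orientation, the fully discrete scheme described above has the standard Crank–Nicolson structure (a generic form, with the specific third-order tempered-WSGD weights left to the paper):

$$
\frac{u_i^{n+1} - u_i^{n}}{\tau}
= \frac{1}{2}\Big(\delta_{h,\lambda}^{\alpha} u_i^{n+1} + \delta_{h,\lambda}^{\alpha} u_i^{n}\Big)
+ \frac{1}{2}\Big(f_i^{n+1} + f_i^{n}\Big),
$$

where $\tau$ and $h$ are the time and space steps, $\alpha$ is the fractional order, $\lambda$ is the tempering parameter, and $\delta_{h,\lambda}^{\alpha}$ denotes the tempered-WSGD approximation of the tempered fractional derivative, a weighted, shifted Grünwald-type sum whose weights and shifts are given in the paper.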