
Physics-informed Neural Networks for Elliptic Partial Differential Equations on 3D Manifolds

Added by Prof. Zhuojia Fu
Publication date: 2021
Language: English





Motivated by recent research on Physics-Informed Neural Networks (PINNs), we make the first attempt to introduce PINNs for the numerical simulation of elliptic Partial Differential Equations (PDEs) on 3D manifolds. PINNs are a deep learning-based technique. Based on the data and the physical models, PINNs use standard feedforward neural networks (NNs) to approximate the solutions of PDE systems. By using automatic differentiation, the PDE system can be explicitly encoded into the NNs, and consequently the sum of mean squared residuals from the PDEs can be minimized with respect to the NN parameters. In this study, the residual in the loss function can be constructed validly via automatic differentiation because of the relationship between the surface differential operators $\nabla_S/\Delta_S$ and the standard Euclidean differential operators $\nabla/\Delta$. We first consider the unit sphere as the surface to investigate the numerical accuracy and convergence of the PINNs for different training-set sizes and NN depths. Further examples on more complex manifolds are provided to verify the robustness of the PINNs.
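To make this construction concrete, below is a minimal sketch (not the authors' code) of how the mean squared PDE residual for the Laplace–Beltrami equation $-\Delta_S u = f$ on the unit sphere can be assembled with automatic differentiation. It uses the identity $\Delta_S u = \Delta u - n^\top (\nabla^2 u)\, n - 2\, (n \cdot \nabla u)$ with outward normal $n = x$ on the sphere; the JAX framework, network sizes, and helper names are illustrative assumptions.

```python
# Minimal PINN residual sketch for -Delta_S u = f on the unit sphere S^2.
# Assumptions: JAX, a small tanh MLP; the names init_mlp/mlp/loss are
# illustrative and do not come from the paper.
import jax
import jax.numpy as jnp

def init_mlp(key, sizes=(3, 32, 32, 1)):
    """Initialise weights and biases of a small feedforward network."""
    params = []
    for din, dout in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        w = jax.random.normal(sub, (din, dout)) * jnp.sqrt(2.0 / din)
        params.append((w, jnp.zeros(dout)))
    return params

def mlp(params, x):
    """Scalar network u_theta(x) for a point x in R^3."""
    for w, b in params[:-1]:
        x = jnp.tanh(x @ w + b)
    w, b = params[-1]
    return (x @ w + b)[0]

def surface_laplacian(params, x):
    """Laplace-Beltrami operator of u_theta at x on the unit sphere,
    written through Euclidean derivatives obtained by automatic
    differentiation: Delta_S u = Delta u - n^T (Hess u) n - 2 (n . grad u),
    with outward normal n = x (|x| = 1)."""
    u = lambda y: mlp(params, y)
    g = jax.grad(u)(x)       # Euclidean gradient
    H = jax.hessian(u)(x)    # Euclidean Hessian
    n = x
    return jnp.trace(H) - n @ H @ n - 2.0 * (n @ g)

def loss(params, pts, f_vals):
    """Mean squared residual of -Delta_S u = f over collocation points
    pts (shape (N, 3), lying on the sphere) with forcing values f_vals."""
    res = jax.vmap(lambda x, f: -surface_laplacian(params, x) - f)(pts, f_vals)
    return jnp.mean(res ** 2)
```

Training would then draw collocation points on the sphere (e.g. normalised Gaussian samples) and minimise this loss with a standard optimiser such as Adam; no boundary term appears because the sphere is a closed surface.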



Related research

Recently, researchers have utilized neural networks to accurately solve partial differential equations (PDEs), enabling a mesh-free method for scientific computation. Unfortunately, network performance drops when encountering a highly nonlinear domain. To improve generalizability, we introduce the novel approach of employing multi-task learning techniques, an uncertainty-weighted loss and gradient surgery, in the context of learning PDE solutions. The multi-task scheme exploits the benefits of learning shared representations, controlled by cross-stitch modules, between multiple related PDEs, which are obtained by varying the PDE parameterization coefficients, in order to generalize better on the original PDE. To encourage the network to pay closer attention to the highly nonlinear regions that are more challenging to learn, we also propose adversarial training for generating supplementary high-loss samples, similarly distributed to the original training distribution. In the experiments, our proposed methods are found to be effective and to reduce the error on unseen data points compared with previous approaches in various PDE examples, including high-dimensional stochastic PDEs.
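As an illustration of the uncertainty-weighted loss mentioned in this abstract, the sketch below (an assumed formulation with learnable per-task log-variances, not code from the paper) combines the per-task PDE residual losses into a single training objective:

```python
import jax.numpy as jnp

def uncertainty_weighted_loss(task_losses, log_vars):
    """Combine per-task mean squared PDE residuals L_i with learnable
    log-variances s_i: L = sum_i exp(-s_i) * L_i + s_i.
    task_losses, log_vars: arrays of shape (num_tasks,)."""
    return jnp.sum(jnp.exp(-log_vars) * task_losses + log_vars)
```

The log-variances are trained jointly with the network parameters, so tasks whose residuals are harder to fit are automatically down-weighted rather than hand-tuned.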
Solving general high-dimensional partial differential equations (PDEs) is a long-standing challenge in numerical mathematics. In this paper, we propose a novel approach to solve high-dimensional linear and nonlinear PDEs defined on arbitrary domains by leveraging their weak formulations. We convert the problem of finding the weak solution of a PDE into an operator norm minimization problem induced by the weak formulation. The weak solution and the test function in the weak formulation are then parameterized as the primal and adversarial networks, respectively, which are alternately updated to approximate the optimal network parameters. Our approach, termed the weak adversarial network (WAN), is fast, stable, and completely mesh-free, which makes it particularly suitable for high-dimensional PDEs defined on irregular domains, where classical numerical methods based on finite differences and finite elements suffer from slow computation, instability, and the curse of dimensionality. We apply our method to a variety of test problems with high-dimensional PDEs to demonstrate its promising performance.
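To illustrate the weak adversarial idea, the following sketch (an assumed Monte Carlo discretisation, not the authors' implementation; it reuses the mlp helper from the earlier sketch and simply takes the L^2 norm of the test function) forms the weak residual of $-\Delta u = f$ with a primal network u and an adversarial test network phi:

```python
import jax
import jax.numpy as jnp

def wan_objective(u_params, phi_params, pts, f_vals, vol):
    """Squared weak residual A[u, phi]^2 / ||phi||^2, where
    A[u, phi] = int_Omega (grad u . grad phi - f phi) dx is approximated
    by Monte Carlo over interior points pts with domain volume vol."""
    grad_u = jax.vmap(jax.grad(lambda x: mlp(u_params, x)))(pts)
    grad_phi = jax.vmap(jax.grad(lambda x: mlp(phi_params, x)))(pts)
    phi = jax.vmap(lambda x: mlp(phi_params, x))(pts)
    a = vol * jnp.mean(jnp.sum(grad_u * grad_phi, axis=1) - f_vals * phi)
    norm2 = vol * jnp.mean(phi ** 2) + 1e-8  # avoid division by zero
    return a ** 2 / norm2
```

Training then alternates gradient descent on u_params with gradient ascent on phi_params, mirroring the primal/adversarial updates described in the abstract.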
Convergence of an adaptive collocation method for the stationary parametric diffusion equation with a finite-dimensional affine coefficient is shown. The adaptive algorithm relies on a recently introduced residual-based reliable a posteriori error estimator. For the convergence proof, a strategy recently used for a stochastic Galerkin method with a hierarchical error estimator is transferred to the collocation setting. Extensions to other variants of adaptive collocation methods (including the classical one proposed in the paper "Dimension-adaptive tensor-product quadrature", Computing (2003), by T. Gerstner and M. Griebel) are explored.
In recent years, sparse spectral methods for solving partial differential equations have been derived using hierarchies of classical orthogonal polynomials on intervals, disks, disk-slices and triangles. In this work we extend the methodology to a hierarchy of non-classical multivariate orthogonal polynomials on spherical caps. The entries of discretisations of partial differential operators can be effectively computed using formulae in terms of (non-classical) univariate orthogonal polynomials. We demonstrate the results on partial differential equations involving the spherical Laplacian and biharmonic operators, showing spectral convergence.
In this paper, we propose third-order semi-discretized schemes in space based on the tempered weighted and shifted Grünwald difference (tempered-WSGD) operators for the tempered fractional diffusion equation. We also present a stability and convergence analysis for the fully discrete scheme based on a Crank–Nicolson scheme in time. A third-order scheme for the tempered Black–Scholes equation is also proposed and tested numerically. Some numerical experiments are carried out to confirm the accuracy and effectiveness of the proposed methods.
