
Exact imposition of boundary conditions with distance functions in physics-informed deep neural networks

Published by: N. Sukumar
Publication date: 2021
Research field: Informatics engineering
Language: English





In this paper, we introduce a new approach based on distance fields to exactly impose boundary conditions in physics-informed deep neural networks. The challenges in satisfying Dirichlet boundary conditions in meshfree and particle methods are well-known. This issue is also pertinent in the development of physics-informed neural networks (PINNs) for the solution of partial differential equations. We introduce geometry-aware trial functions in artificial neural networks to improve the training in deep learning for partial differential equations. To this end, we use concepts from constructive solid geometry (R-functions) and generalized barycentric coordinates (mean value potential fields) to construct $\phi$, an approximate distance function to the boundary of a domain. To exactly impose homogeneous Dirichlet boundary conditions, the trial function is taken as $\phi$ multiplied by the PINN approximation, and its generalization via transfinite interpolation is used to a priori satisfy inhomogeneous Dirichlet (essential), Neumann (natural), and Robin boundary conditions on complex geometries. In doing so, we eliminate the modeling error associated with the satisfaction of boundary conditions in a collocation method and ensure that kinematic admissibility is met pointwise in a Ritz method. We present numerical solutions for linear and nonlinear boundary-value problems over domains with affine and curved boundaries. Benchmark problems in 1D for linear elasticity, advection-diffusion, and beam bending; and in 2D for the Poisson equation, biharmonic equation, and the nonlinear Eikonal equation are considered. The approach extends to higher dimensions, and we showcase its use by solving a Poisson problem with homogeneous Dirichlet boundary conditions over the 4D hypercube. This study provides a pathway for meshfree analysis to be conducted on the exact geometry without domain discretization.
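The core construction can be illustrated in one dimension. The sketch below (plain NumPy, with a fixed-weight stand-in for a trained network; all names are illustrative and not taken from the paper's code) builds the trial function $u(x) = \phi(x)\,N(x)$ on $(0, 1)$ and shows that the homogeneous Dirichlet conditions hold exactly for any network parameters:

```python
import numpy as np

def phi(x):
    # Distance-like function for the 1D domain (0, 1):
    # vanishes on the boundary {0, 1} and is positive inside.
    return x * (1.0 - x)

def mlp(x, params):
    # Tiny fixed-weight MLP standing in for the trained PINN output.
    W1, b1, W2, b2 = params
    h = np.tanh(np.outer(x, W1) + b1)
    return h @ W2 + b2

rng = np.random.default_rng(0)
params = (rng.normal(size=8), rng.normal(size=8),
          rng.normal(size=8), rng.normal())

def trial(x, params):
    # u(x) = phi(x) * N(x; theta): homogeneous Dirichlet conditions
    # hold identically, regardless of the network parameters theta.
    return phi(x) * mlp(x, params)

x = np.array([0.0, 0.3, 1.0])
u = trial(x, params)
print(u[0], u[-1])  # exactly 0.0 at both boundary points
```

Because the boundary condition is built into the trial function itself, training only has to minimize the PDE residual in the interior; no boundary-loss term, and hence no associated modeling error, is introduced.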




Read also

Motivated by recent research on Physics-Informed Neural Networks (PINNs), we make the first attempt to introduce PINNs for the numerical simulation of elliptic Partial Differential Equations (PDEs) on 3D manifolds. PINNs are one of the deep learning-based techniques. Based on the data and physical models, PINNs introduce standard feedforward neural networks (NNs) to approximate the solutions to the PDE systems. By using automatic differentiation, the PDE system can be explicitly encoded into NNs and, consequently, the sum of mean squared residuals from the PDEs can be minimized with respect to the NN parameters. In this study, the residual in the loss function can be constructed validly by using automatic differentiation because of the relationship between the surface differential operators $\nabla_S/\Delta_S$ and the standard Euclidean differential operators $\nabla/\Delta$. We first consider the unit sphere as the surface to investigate the numerical accuracy and convergence of the PINNs with different training sample sizes and network depths. Further examples on different complex manifolds verify the robustness of the PINNs.
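The residual-based loss at the heart of a PINN can be sketched as follows. For transparency this uses the closed-form second derivative of a one-hidden-layer tanh network in place of automatic differentiation; the problem (1D Poisson $u'' = f$) and all names are illustrative, not drawn from the paper:

```python
import numpy as np

# One-hidden-layer network N(x) = sum_i v_i * tanh(w_i * x + b_i).
rng = np.random.default_rng(1)
w, b, v = rng.normal(size=16), rng.normal(size=16), rng.normal(size=16)

def net(x):
    return np.tanh(np.outer(x, w) + b) @ v

def net_xx(x):
    # Closed-form second derivative; deep learning frameworks obtain
    # the same quantity via automatic differentiation.
    z = np.tanh(np.outer(x, w) + b)
    return ((-2.0 * z * (1.0 - z**2)) * w**2) @ v

def pde_loss(x):
    # Mean squared residual of u'' = f with f = -pi^2 sin(pi x),
    # whose exact solution is u = sin(pi x).
    f = -np.pi**2 * np.sin(np.pi * x)
    r = net_xx(x) - f
    return np.mean(r**2)

x = np.linspace(0.0, 1.0, 32)   # collocation points
loss = pde_loss(x)              # scalar loss to minimize over (w, b, v)
print(loss)
```

On a manifold, the same pattern applies with the surface operators $\nabla_S/\Delta_S$ expressed through the ambient Euclidean operators, which is what makes the residual expressible by automatic differentiation.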
Yulei Liao, Pingbing Ming (2019)
We propose a new method to deal with the essential boundary conditions encountered in deep learning-based numerical solvers for partial differential equations. The trial functions represented by deep neural networks are non-interpolatory, which makes the enforcement of essential boundary conditions a nontrivial matter. Our method resorts to Nitsche's variational formulation to deal with this difficulty; it is consistent and does not incur significant extra computational cost. We prove an error estimate in the energy norm and illustrate the method on several representative problems posed in up to 100 dimensions.
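For context, Nitsche's formulation for the Poisson problem $-\Delta u = f$ with essential condition $u = g$ on $\partial\Omega$ can be stated as minimizing an energy augmented with a consistency term and a penalty term (a standard form, written here for orientation rather than quoted from the paper):

```latex
\min_{u}\; \frac{1}{2}\int_{\Omega} |\nabla u|^{2}\,\mathrm{d}x
\;-\; \int_{\Omega} f\,u\,\mathrm{d}x
\;-\; \int_{\partial\Omega} \frac{\partial u}{\partial n}\,(u - g)\,\mathrm{d}s
\;+\; \frac{\beta}{2}\int_{\partial\Omega} (u - g)^{2}\,\mathrm{d}s
```

where $\beta > 0$ is a stabilization parameter. Minimizing this over the non-interpolatory neural network trial functions enforces $u = g$ weakly, which is why no pointwise interpolation of boundary values is needed.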
We propose two different discrete formulations for the weak imposition of the Neumann boundary conditions of the Darcy flow. The Raviart-Thomas mixed finite element on both triangular and quadrilateral meshes is considered for both methods. One is a consistent discretization depending on a weighting parameter scaling as $\mathcal{O}(h^{-1})$, while the other is a penalty-type formulation obtained as the discretization of a perturbation of the original problem, relying on a parameter scaling as $\mathcal{O}(h^{-k-1})$, $k$ being the order of the Raviart-Thomas space. We rigorously prove that both methods are stable and result in optimally convergent numerical schemes with respect to appropriate mesh-dependent norms, although the chosen norms do not scale as the usual $L^2$-norm. However, we are still able to recover the optimal a priori $L^2$-error estimates for the velocity field for the first and second schemes with high-order and lowest-order Raviart-Thomas discretizations, respectively. Finally, some numerical examples validating the theory are exhibited.
Physics-informed neural network (PINN) is a data-driven approach to solve equations. It is successful in many applications; however, the accuracy of the PINN is not satisfactory when it is used to solve multiscale equations. Homogenization is a way of approximating a multiscale equation by a homogenized equation without the multiscale property; it involves solving cell problems and the homogenized equation. The cell problems are periodic, and we propose an oversampling strategy which greatly improves the PINN accuracy on periodic problems. The homogenized equation has constant or slowly varying coefficients and can also be solved accurately by the PINN. We hence propose a 3-step method to improve the PINN accuracy for solving multiscale problems with the help of homogenization. We apply our method to three equations representing three different homogenization settings. The results show that the proposed method greatly improves the PINN accuracy. Moreover, we find that PINN-aided homogenization may achieve better accuracy than homogenization driven by classical numerical methods; the PINN is hence a potential alternative for implementing homogenization.
We introduce the concept of a Graph-Informed Neural Network (GINN), a hybrid approach combining deep learning with probabilistic graphical models (PGMs) that acts as a surrogate for physics-based representations of multiscale and multiphysics systems. GINNs address the twin challenges of removing intrinsic computational bottlenecks in physics-based models and generating large data sets for estimating probability distributions of quantities of interest (QoIs) with a high degree of confidence. Both the selection of the complex physics learned by the NN and its supervised learning/prediction are informed by the PGM, which includes the formulation of structured priors for tunable control variables (CVs) to account for their mutual correlations and ensure physically sound CV and QoI distributions. GINNs accelerate the prediction of QoIs essential for simulation-based decision-making where generating sufficient sample data using physics-based models alone is often prohibitively expensive. Using a real-world application grounded in supercapacitor-based energy storage, we describe the construction of GINNs from a Bayesian network-embedded homogenized model for supercapacitor dynamics, and demonstrate their ability to produce kernel density estimates of relevant non-Gaussian, skewed QoIs with tight confidence intervals.
