
Physics-informed graph neural Galerkin networks: A unified framework for solving PDE-governed forward and inverse problems

Published by: Jian-Xun Wang
Publication date: 2021
Research field: Informatics engineering
Paper language: English





Despite the great promise of physics-informed neural networks (PINNs) for solving forward and inverse problems, several technical challenges remain roadblocks to more complex and realistic applications. First, most existing PINNs are based on a point-wise formulation with fully-connected networks to learn continuous functions, which suffers from poor scalability and difficulty in enforcing boundary conditions. Second, the infinite search space over-complicates the non-convex optimization of network training. Third, although convolutional neural network (CNN)-based discrete learning can significantly improve training efficiency, CNNs struggle to handle irregular geometries with unstructured meshes. To address these challenges, we present a novel discrete PINN framework, based on a graph convolutional network (GCN) and the variational structure of the PDE, that solves forward and inverse partial differential equations (PDEs) in a unified manner. The use of a piecewise polynomial basis reduces the dimension of the search space and facilitates training and convergence. Without the penalty-parameter tuning required by classic PINNs, the proposed method strictly imposes boundary conditions and assimilates sparse data in both forward and inverse settings. The flexibility of GCNs is leveraged for irregular geometries with unstructured meshes. The effectiveness and merit of the proposed method are demonstrated on a variety of forward and inverse computational mechanics problems governed by both linear and nonlinear PDEs.


Read also

We propose a Bayesian physics-informed neural network (B-PINN) to solve both forward and inverse nonlinear problems described by partial differential equations (PDEs) and noisy data. In this Bayesian framework, the Bayesian neural network (BNN) combined with a PINN for PDEs serves as the prior, while Hamiltonian Monte Carlo (HMC) or variational inference (VI) serves as an estimator of the posterior. B-PINNs make use of both physical laws and scattered noisy measurements to provide predictions and quantify the aleatoric uncertainty arising from the noisy data in the Bayesian framework. Compared with PINNs, in addition to uncertainty quantification, B-PINNs obtain more accurate predictions in scenarios with large noise due to their capability of avoiding overfitting. We conduct a systematic comparison between the two different approaches for the B-PINN posterior estimation (i.e., HMC or VI), along with dropout used for quantifying uncertainty in deep neural networks. Our experiments show that HMC is more suitable than VI for B-PINN posterior estimation, while dropout employed in PINNs can hardly provide accurate predictions with reasonable uncertainty. Finally, we replace the BNN in the prior with a truncated Karhunen-Loève (KL) expansion combined with HMC or a deep normalizing flow (DNF) model as posterior estimators. The KL expansion is as accurate as the BNN and much faster, but unlike the BNN-based framework it cannot be easily extended to high-dimensional problems.
In this study, we employ physics-informed neural networks (PINNs) to solve forward and inverse problems via the Boltzmann-BGK formulation (PINN-BGK), enabling PINNs to model flows in both the continuum and rarefied regimes. In particular, the PINN-BGK is composed of three sub-networks, i.e., the first for approximating the equilibrium distribution function, the second for approximating the non-equilibrium distribution function, and the third for encoding the Boltzmann-BGK equation as well as the corresponding boundary/initial conditions. By minimizing the residuals of the governing equations and the mismatch between the predicted and provided boundary/initial conditions, we can approximate the Boltzmann-BGK equation for both continuum and rarefied flows. For forward problems, the PINN-BGK is utilized to solve various benchmark flows given boundary/initial conditions, e.g., Kovasznay flow, Taylor-Green flow, cavity flow, and micro Couette flow for Knudsen numbers up to 5. For inverse problems, we focus on rarefied flows in which accurate boundary conditions are difficult to obtain. We employ the PINN-BGK to infer the flow field in the entire computational domain given a limited number of interior scattered measurements of the velocity with unknown boundary conditions. Results for the two-dimensional micro Couette and micro cavity flows with Knudsen numbers ranging from 0.1 to 10 indicate that the PINN-BGK can infer the velocity field in the entire domain with good accuracy. Finally, we also present some results on using transfer learning to accelerate the training process. Specifically, we can obtain a three-fold speedup compared to the standard training process (e.g., Adam plus L-BFGS-B) for the two-dimensional flow problems considered in our work.
Partial differential equations are central to describing many physical phenomena. In many applications these phenomena are observed through a sensor network, with the aim of inferring their underlying properties. Leveraging certain results in sampling and approximation theory, we present a new framework for solving a class of inverse source problems for physical fields governed by linear partial differential equations. Specifically, we demonstrate that the unknown field sources can be recovered from a sequence of so-called generalised measurements by using multidimensional frequency estimation techniques. Next we show that, for physics-driven fields, this sequence of generalised measurements can be estimated by computing a linear weighted sum of the sensor measurements, whereby the exact weights (of the sums) correspond to those that reproduce multidimensional exponentials when used to linearly combine translates of a particular prototype function related to the Green's function of the underlying field. Explicit formulae are then derived for the sequence of weights that map sensor samples to the exact sequence of generalised measurements when the Green's function satisfies the generalised Strang-Fix condition. Otherwise, the same mapping yields a close approximation of the generalised measurements. Based on this new framework we develop practical, noise-robust sensor network strategies for solving the inverse source problem, and then present numerical simulation results to verify their performance.
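The frequency-estimation step that the abstract above relies on can be sketched in one dimension: Prony's method recovers the exponential modes of a short sequence of (here, noiseless) generalised measurements via an annihilating filter. The measurement model, mode values, and function name below are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

def prony(meas, K):
    """Recover K exponential modes z_k from samples m[n] = sum_k c_k * z_k**n."""
    N = len(meas)
    # Each row enforces m[n] + a_1*m[n-1] + ... + a_K*m[n-K] = 0 for n = K..N-1;
    # column j holds the shifted sequence m[n-1-j].
    A = np.column_stack([meas[K - 1 - j:N - 1 - j] for j in range(K)])
    rhs = -meas[K:N]
    a = np.linalg.lstsq(A, rhs, rcond=None)[0]
    # The modes are the roots of the annihilating polynomial z^K + a_1 z^(K-1) + ...
    return np.roots(np.concatenate(([1.0], a)))

# Two hypothetical sources encoded as unit-modulus modes
z_true = np.exp(1j * np.array([0.5, 1.2]))
n = np.arange(8)
meas = 2.0 * z_true[0]**n + 0.7 * z_true[1]**n
z_est = prony(meas, K=2)
```

In the noiseless case the recovered modes match the true ones to machine precision; the paper's setting additionally handles noise and multidimensional modes, which this toy sketch does not attempt.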
The Alternating Direction Method of Multipliers (ADMM) provides a natural way of solving inverse problems with multiple partial differential equation (PDE) forward models and nonsmooth regularization. ADMM allows splitting these large-scale inverse problems into smaller, simpler sub-problems, for which computationally efficient solvers are available. In particular, we apply large-scale second-order optimization methods to solve the fully-decoupled Tikhonov-regularized inverse problems stemming from each PDE forward model. We use fast proximal methods to handle the nonsmooth regularization term. In this work, we discuss several adaptations (such as the choice of the consensus norm) needed to maintain consistency with the underlying infinite-dimensional problem. We present two imaging applications inspired by electrical impedance tomography and quantitative photoacoustic tomography to demonstrate the proposed method's effectiveness.
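The split that this abstract describes, a smooth Tikhonov-style sub-problem paired with a proximal step for the nonsmooth regularizer and a consensus/dual update, can be sketched on a generic toy problem rather than a PDE-constrained one. The sketch below is standard ADMM for min_x 0.5*||Ax - b||^2 + lam*||x||_1 with synthetic data; it is not the paper's solver, and all names are ours.

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    """Scaled-form ADMM: smooth sub-problem is a linear solve, the
    nonsmooth l1 term is handled by its prox (soft-thresholding)."""
    m, n = A.shape
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    AtA = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(AtA, Atb + rho * (z - u))            # smooth (Tikhonov-like) step
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # prox of lam*||.||_1
        u = u + x - z                                            # scaled dual / consensus update
    return z

# Synthetic sparse recovery problem (placeholder data, noiseless)
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
x_true = np.zeros(10)
x_true[:3] = [1.5, -2.0, 0.8]
b = A @ x_true
x_hat = admm_lasso(A, b, lam=0.1)
```

The design point the abstract emphasizes carries over directly: the expensive forward-model solve lives entirely in the x-update, while the nonsmooth term never needs to be differentiated because it only appears through its proximal operator.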
We introduce the concept of a Graph-Informed Neural Network (GINN), a hybrid approach combining deep learning with probabilistic graphical models (PGMs) that acts as a surrogate for physics-based representations of multiscale and multiphysics systems. GINNs address the twin challenges of removing intrinsic computational bottlenecks in physics-based models and generating large data sets for estimating probability distributions of quantities of interest (QoIs) with a high degree of confidence. Both the selection of the complex physics learned by the NN and its supervised learning/prediction are informed by the PGM, which includes the formulation of structured priors for tunable control variables (CVs) to account for their mutual correlations and ensure physically sound CV and QoI distributions. GINNs accelerate the prediction of QoIs essential for simulation-based decision-making where generating sufficient sample data using physics-based models alone is often prohibitively expensive. Using a real-world application grounded in supercapacitor-based energy storage, we describe the construction of GINNs from a Bayesian network-embedded homogenized model for supercapacitor dynamics, and demonstrate their ability to produce kernel density estimates of relevant non-Gaussian, skewed QoIs with tight confidence intervals.