
Physics-informed neural networks with hard constraints for inverse design

Published by: Lu Lu
Publication date: 2021
Language: English





Inverse design arises in a variety of areas in engineering such as acoustics, mechanics, thermal/electronic transport, electromagnetism, and optics. Topology optimization is a major form of inverse design, where we optimize a designed geometry to achieve targeted properties and the geometry is parameterized by a density function. This optimization is challenging because it has a very high dimensionality and is usually constrained by partial differential equations (PDEs) and additional inequalities. Here, we propose a new deep learning method -- physics-informed neural networks with hard constraints (hPINNs) -- for solving topology optimization. hPINN leverages the recent development of PINNs for solving PDEs, and thus does not rely on any numerical PDE solver. However, all the constraints in PINNs are soft constraints, and hence we impose hard constraints by using the penalty method and the augmented Lagrangian method. We demonstrate the effectiveness of hPINN for a holography problem in optics and a fluid problem of Stokes flow. We achieve the same objective as conventional PDE-constrained optimization methods based on adjoint methods and numerical PDE solvers, but find that the design obtained from hPINN is often simpler and smoother for problems whose solution is not unique. Moreover, the implementation of inverse design with hPINN can be easier than that of conventional methods.
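Since the abstract centers on replacing soft PDE penalties with hard constraints, a minimal sketch may help. The following is an illustrative augmented Lagrangian loop for a PINN, assuming a toy Laplace residual and a stand-in design objective; all names and hyperparameters are hypothetical and this is not the authors' code.

```python
import torch

# Minimal sketch of an augmented Lagrangian loop for a PINN (illustrative):
# minimize a design objective J subject to a hard PDE constraint r = 0
# enforced at collocation points.
net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1)
)

n_pts = 1000
x = torch.rand(n_pts, 2, requires_grad=True)   # collocation points
mu = 1.0                                       # penalty coefficient
lam = torch.zeros(n_pts)                       # one multiplier per point

def pde_residual(x):
    """Stand-in residual: Laplace's equation u_xx + u_yy = 0."""
    u = net(x)
    g = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(g[:, 0].sum(), x, create_graph=True)[0][:, 0]
    u_yy = torch.autograd.grad(g[:, 1].sum(), x, create_graph=True)[0][:, 1]
    return u_xx + u_yy

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for outer in range(10):              # augmented Lagrangian outer iterations
    for inner in range(1000):        # minimize over the network weights
        r = pde_residual(x)
        J = net(x).mean()            # stand-in design objective
        loss = J + (lam * r).mean() + 0.5 * mu * (r ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    lam = lam + mu * pde_residual(x).detach()   # multiplier update
    mu *= 2.0                                   # tighten the penalty
```

The key difference from a plain PINN is the multiplier term (lam * r): rather than fixing a penalty weight once, the outer loop updates lam and grows mu until the residual is driven toward zero, which is what makes the constraint hard rather than soft.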




Read also

Multifidelity simulation methodologies are often used in an attempt to judiciously combine low-fidelity and high-fidelity simulation results in an accuracy-increasing, cost-saving way. Candidates for this approach are simulation methodologies for which there are fidelity differences connected with significant computational cost differences. Physics-informed Neural Networks (PINNs) are candidates for these types of approaches due to the significant difference in training times required when different fidelities (expressed in terms of architecture width and depth as well as optimization criteria) are employed. In this paper, we propose a particular multifidelity approach applied to PINNs that exploits low-rank structure. We demonstrate that width, depth, and optimization criteria can be used as parameters related to model fidelity, and show numerical justification of cost differences in training due to fidelity parameter choices. We test our multifidelity scheme on various canonical forward PDE models that have been presented in the emerging PINNs literature.
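As an aside, the notion of width and depth as fidelity parameters can be made concrete with a hedged sketch: two PINN backbones for the same PDE, one cheap and one expensive. How the paper combines them via low-rank structure is its contribution and is not reproduced here; the snippet only illustrates the fidelity knobs.

```python
import torch

# Illustrative only: two PINN backbones differing in width and depth,
# i.e., the "fidelity parameters" named in the abstract.
def pinn_backbone(width, depth):
    layers, n_in = [], 2                     # inputs (x, t)
    for _ in range(depth):
        layers += [torch.nn.Linear(n_in, width), torch.nn.Tanh()]
        n_in = width
    layers.append(torch.nn.Linear(n_in, 1))
    return torch.nn.Sequential(*layers)

low_fid  = pinn_backbone(width=16, depth=2)   # fast to train, less accurate
high_fid = pinn_backbone(width=128, depth=6)  # slow to train, more accurate
```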
Guofei Pang, Lu Lu, 2018
Physics-informed neural networks (PINNs) are effective in solving integer-order partial differential equations (PDEs) based on scattered and noisy data. PINNs employ standard feedforward neural networks (NNs) with the PDEs explicitly encoded into the NN using automatic differentiation, while the sum of the mean-squared PDE-residuals and the mean-squared error in initial/boundary conditions is minimized with respect to the NN parameters. We extend PINNs to fractional PINNs (fPINNs) to solve space-time fractional advection-diffusion equations (fractional ADEs), and we demonstrate their accuracy and effectiveness in solving multi-dimensional forward and inverse problems with forcing terms whose values are only known at randomly scattered spatio-temporal coordinates (black-box forcing terms). A novel element of the fPINNs is the hybrid approach that we introduce for constructing the residual in the loss function using both automatic differentiation for the integer-order operators and numerical discretization for the fractional operators. We consider 1D time-dependent fractional ADEs and compare white-box (WB) and black-box (BB) forcing. We observe that for BB forcing, fPINNs outperform the finite difference method (FDM). Subsequently, we consider multi-dimensional time-, space-, and space-time-fractional ADEs using the directional fractional Laplacian and we observe relative errors of $10^{-4}$. Finally, we solve several inverse problems in 1D, 2D, and 3D to identify the fractional orders, diffusion coefficients, and transport velocities and obtain accurate results even in the presence of significant noise.
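The hybrid residual construction is the part most easily illustrated in code. Below is a hedged sketch assuming a 1D left-sided space-fractional derivative discretized with shifted Grunwald-Letnikov weights on a uniform grid, while the time derivative comes from automatic differentiation; the equation, grid, and network are illustrative stand-ins, not the paper's setup.

```python
import torch

alpha, h = 1.5, 0.01                       # fractional order, grid spacing
N = 200
xs = torch.linspace(0.0, 2.0, N).reshape(-1, 1)
t = torch.full_like(xs, 0.5)               # evaluate at one time slice
xt = torch.cat([xs, t], dim=1).requires_grad_(True)

net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))

u = net(xt)                                # u(x_i, t) on the grid
# Integer-order part via automatic differentiation:
u_t = torch.autograd.grad(u.sum(), xt, create_graph=True)[0][:, 1]

# Grunwald-Letnikov weights g_k = (-1)^k * binom(alpha, k), built recursively.
g = [1.0]
for k in range(1, N):
    g.append(g[-1] * (k - 1 - alpha) / k)
g = torch.tensor(g)

# Shifted left-sided fractional derivative on the grid:
# D^alpha u(x_i) ~ h^{-alpha} * sum_k g_k * u(x_{i-k+1}).
uu = u.squeeze(1)
frac = []
for i in range(N - 1):                     # skip last node for the shift
    k = torch.arange(0, i + 2)             # keeps x_{i-k+1} on the grid
    frac.append((g[k] * uu[i - k + 1]).sum() / h ** alpha)
frac = torch.stack(frac)

# Toy fractional diffusion residual u_t = D^alpha u (unit coefficient, f = 0):
residual = u_t[: N - 1] - frac
loss = (residual ** 2).mean()              # plus BC/IC terms in practice
```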
We introduce the concept of a Graph-Informed Neural Network (GINN), a hybrid approach combining deep learning with probabilistic graphical models (PGMs) that acts as a surrogate for physics-based representations of multiscale and multiphysics systems. GINNs address the twin challenges of removing intrinsic computational bottlenecks in physics-based models and generating large data sets for estimating probability distributions of quantities of interest (QoIs) with a high degree of confidence. Both the selection of the complex physics learned by the NN and its supervised learning/prediction are informed by the PGM, which includes the formulation of structured priors for tunable control variables (CVs) to account for their mutual correlations and ensure physically sound CV and QoI distributions. GINNs accelerate the prediction of QoIs essential for simulation-based decision-making where generating sufficient sample data using physics-based models alone is often prohibitively expensive. Using a real-world application grounded in supercapacitor-based energy storage, we describe the construction of GINNs from a Bayesian network-embedded homogenized model for supercapacitor dynamics, and demonstrate their ability to produce kernel density estimates of relevant non-Gaussian, skewed QoIs with tight confidence intervals.
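A loose sketch of the workflow described above, with all names hypothetical: sample correlated control variables from a structured prior standing in for the PGM, push them through a trained surrogate, and form a kernel density estimate of the QoI.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Correlated CV prior: a stand-in for the Bayesian network's structured prior.
cov = np.array([[1.0, 0.6], [0.6, 1.0]])
cvs = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=10_000)

def surrogate(cv):
    """Placeholder for the trained NN surrogate of the physics model."""
    return np.tanh(cv[:, 0]) + 0.5 * cv[:, 1] ** 2

qoi = surrogate(cvs)                       # cheap samples of the QoI
kde = gaussian_kde(qoi)                    # kernel density estimate
density = kde(np.linspace(qoi.min(), qoi.max(), 200))
```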
In this study, we employ physics-informed neural networks (PINNs) to solve forward and inverse problems via the Boltzmann-BGK formulation (PINN-BGK), enabling PINNs to model flows in both the continuum and rarefied regimes. In particular, the PINN-BGK is composed of three sub-networks, i.e., the first for approximating the equilibrium distribution function, the second for approximating the non-equilibrium distribution function, and the third one for encoding the Boltzmann-BGK equation as well as the corresponding boundary/initial conditions. By minimizing the residuals of the governing equations and the mismatch between the predicted and provided boundary/initial conditions, we can approximate the Boltzmann-BGK equation for both continuous and rarefied flows. For forward problems, the PINN-BGK is utilized to solve various benchmark flows given boundary/initial conditions, e.g., Kovasznay flow, Taylor-Green flow, cavity flow, and micro Couette flow for Knudsen number up to 5. For inverse problems, we focus on rarefied flows in which accurate boundary conditions are difficult to obtain. We employ the PINN-BGK to infer the flow field in the entire computational domain given a limited number of interior scattered measurements on the velocity with unknown boundary conditions. Results for the two-dimensional micro Couette and micro cavity flows with Knudsen numbers ranging from 0.1 to 10 indicate that the PINN-BGK can infer the velocity field in the entire domain with good accuracy. Finally, we also present some results on using transfer learning to accelerate the training process. Specifically, we can obtain a three-fold speedup compared to the standard training process (e.g., Adam plus L-BFGS-B) for the two-dimensional flow problems considered in our work.
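The three-sub-network decomposition can be sketched directly. The following illustrative snippet (hypothetical names, 1D for brevity) uses one network for the equilibrium part and one for the non-equilibrium part, and encodes the BGK relaxation in the residual.

```python
import torch

def mlp(n_in, n_out):
    return torch.nn.Sequential(torch.nn.Linear(n_in, 64), torch.nn.Tanh(),
                               torch.nn.Linear(64, n_out))

f_eq_net  = mlp(3, 1)   # inputs (x, t, xi): position, time, particle velocity
f_neq_net = mlp(3, 1)
tau = 0.1               # relaxation time (assumed constant for the sketch)

z = torch.rand(512, 3, requires_grad=True)   # collocation points (x, t, xi)
f_eq  = f_eq_net(z)
f_neq = f_neq_net(z)
f = f_eq + f_neq                             # total distribution function

g = torch.autograd.grad(f.sum(), z, create_graph=True)[0]
f_x, f_t = g[:, 0:1], g[:, 1:2]
xi = z[:, 2:3]

# 1D BGK equation: f_t + xi * f_x = (f_eq - f) / tau
residual = f_t + xi * f_x - (f_eq - f) / tau
loss = (residual ** 2).mean()                # plus BC/IC mismatch terms
```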
Physics-informed neural networks (PINNs) encode physical conservation laws and prior physical knowledge into the neural networks, ensuring the correct physics is represented accurately while alleviating the need for supervised learning to a great degree. While effective for relatively short-term time integration, when long time integration of the time-dependent PDEs is sought, the time-space domain may become arbitrarily large and hence training of the neural network may become prohibitively expensive. To this end, we develop a parareal physics-informed neural network (PPINN), hence decomposing a long-time problem into many independent short-time problems supervised by an inexpensive/fast coarse-grained (CG) solver. In particular, the serial CG solver is designed to provide approximate predictions of the solution at discrete times, while many fine PINNs are initiated simultaneously to correct the solution iteratively. There is a two-fold benefit from training PINNs with small data sets rather than working on a large data set directly, i.e., training of individual PINNs with small data is much faster, while training the fine PINNs can be readily parallelized. Consequently, compared to the original PINN approach, the proposed PPINN approach may achieve a significant speedup for long-time integration of PDEs, assuming that the CG solver is fast and can provide reasonable predictions of the solution, hence aiding the PPINN solution to converge in just a few iterations. To investigate the PPINN performance on solving time-dependent PDEs, we first apply the PPINN to solve the Burgers equation, and subsequently we apply the PPINN to solve a two-dimensional nonlinear diffusion-reaction equation. Our results demonstrate that PPINNs converge in a couple of iterations with significant speed-ups proportional to the number of time-subdomains employed.
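The parareal structure of PPINN is independent of the PINN itself and can be shown with placeholder solvers. In the sketch below, G is a cheap coarse propagator and F stands in for a fine PINN solve on one subdomain; both are stubs for the scalar ODE du/dt = -u, chosen only to keep the example self-contained.

```python
import numpy as np

T, n_sub = 1.0, 10
ts = np.linspace(0.0, T, n_sub + 1)        # time-subdomain boundaries

def G(u0, t0, t1):
    """Coarse solver stub: one explicit Euler step of du/dt = -u."""
    return u0 + (t1 - t0) * (-u0)

def F(u0, t0, t1):
    """Fine solver stub; in PPINN this is a PINN trained on [t0, t1]."""
    return u0 * np.exp(-(t1 - t0))         # exact solve stands in here

u = np.zeros(n_sub + 1); u[0] = 1.0
for k in range(n_sub):                     # initial serial coarse sweep
    u[k + 1] = G(u[k], ts[k], ts[k + 1])

for it in range(5):                        # parareal correction iterations
    fine = [F(u[k], ts[k], ts[k + 1]) for k in range(n_sub)]  # parallelizable
    u_new = u.copy()
    for k in range(n_sub):                 # serial coarse correction
        u_new[k + 1] = (G(u_new[k], ts[k], ts[k + 1])
                        + fine[k] - G(u[k], ts[k], ts[k + 1]))
    u = u_new
```

The fine solves in each iteration are independent across subdomains, which is where the parallel speedup comes from; only the short coarse correction sweep is serial.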
