
Normalizing field flows: Solving forward and inverse stochastic differential equations using physics-informed flow models

Published by: Hao Wu
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





We introduce in this work the normalizing field flows (NFF) for learning random fields from scattered measurements. More precisely, we construct a bijective transformation (a normalizing flow characterized by neural networks) between a Gaussian random field with the Karhunen-Loève (KL) expansion structure and the target stochastic field, where the KL expansion coefficients and the invertible networks are trained by maximizing the sum of the log-likelihoods over the scattered measurements. The NFF model can be used to solve data-driven forward, inverse, and mixed forward/inverse stochastic partial differential equations in a unified framework. We demonstrate the capability of the proposed NFF model for learning non-Gaussian processes and for solving different types of stochastic partial differential equations.
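As a minimal, hedged illustration of the idea, the PyTorch sketch below trains a RealNVP-style coupling flow by maximizing the log-likelihood of field samples observed at a fixed set of sensor locations. The toy non-Gaussian data, the network sizes, and the use of a generic coupling flow in place of the paper's KL-structured base field are all assumptions made for the example.

import math
import torch
import torch.nn as nn

class Coupling(nn.Module):
    # One affine coupling layer; inverse() maps data to latent with its log-det.
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(nn.Linear(self.half, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 2 * (dim - self.half)))

    def inverse(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        s, t = self.net(y1).chunk(2, dim=1)
        s = torch.tanh(s)                         # bound the log-scales for stability
        z2 = (y2 - t) * torch.exp(-s)
        return torch.cat([y1, z2], dim=1), -s.sum(dim=1)

class FlowSketch(nn.Module):
    def __init__(self, dim, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(Coupling(dim) for _ in range(n_layers))

    def log_prob(self, y):                        # change-of-variables likelihood
        z, logdet = y, 0.0
        for layer in self.layers:
            z, ld = layer.inverse(z)
            z = z.flip(dims=(1,))                 # swap halves between couplings
            logdet = logdet + ld
        base = (-0.5 * z ** 2 - 0.5 * math.log(2 * math.pi)).sum(dim=1)
        return base + logdet

dim = 8                                           # number of sensor locations (assumed)
xi = torch.randn(4096, dim)                       # Gaussian reference samples
data = torch.tanh(xi) + 0.1 * xi ** 2             # toy non-Gaussian "measurements"
flow = FlowSketch(dim)
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
for step in range(2000):
    loss = -flow.log_prob(data).mean()            # maximize the summed log-likelihood
    opt.zero_grad(); loss.backward(); opt.step()

In the actual NFF, the base distribution carries the KL expansion structure and its coefficients are trained jointly with the invertible networks.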




Read also

Recently, researchers have utilized neural networks to accurately solve partial differential equations (PDEs), enabling mesh-free scientific computation. Unfortunately, network performance drops in highly nonlinear regions of the domain. To improve generalizability, we introduce the novel approach of employing multi-task learning techniques, namely an uncertainty-weighted loss and gradient surgery, in the context of learning PDE solutions. The multi-task scheme exploits the benefits of learning shared representations, controlled by cross-stitch modules, across multiple related PDEs, which are obtained by varying the PDE parameterization coefficients, in order to generalize better on the original PDE. To encourage the network to pay closer attention to the highly nonlinear regions that are harder to learn, we also propose adversarial training for generating supplementary high-loss samples distributed similarly to the original training data. In the experiments, our proposed methods are found to be effective and to reduce the error on unseen data points compared to previous approaches on various PDE examples, including high-dimensional stochastic PDEs.
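The uncertainty-weighting loss mentioned above has a well-known closed form (learnable homoscedastic task weights in the style of Kendall et al.); the minimal PyTorch sketch below shows that weighting, with the per-task PDE losses left abstract. The class name and interface are illustrative, not the authors' code, and the gradient-surgery step is omitted.

import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    # Learnable log-variances s_i; total = sum_i exp(-s_i) * L_i + s_i.
    def __init__(self, n_tasks):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(n_tasks))

    def forward(self, task_losses):
        total = 0.0
        for s, task_loss in zip(self.log_vars, task_losses):
            # exp(-s) down-weights noisy tasks; the +s term keeps s from diverging.
            total = total + torch.exp(-s) * task_loss + s
        return total

# Usage: each entry is the PINN loss of one PDE parameterization (dummy values here).
criterion = UncertaintyWeightedLoss(n_tasks=3)
total = criterion([torch.tensor(0.5), torch.tensor(1.2), torch.tensor(0.1)])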
In this study, we employ physics-informed neural networks (PINNs) to solve forward and inverse problems via the Boltzmann-BGK formulation (PINN-BGK), enabling PINNs to model flows in both the continuum and rarefied regimes. In particular, the PINN-BGK is composed of three sub-networks: the first approximating the equilibrium distribution function, the second approximating the non-equilibrium distribution function, and the third encoding the Boltzmann-BGK equation together with the corresponding boundary/initial conditions. By minimizing the residuals of the governing equations and the mismatch between the predicted and prescribed boundary/initial conditions, we can approximate the Boltzmann-BGK equation for both continuum and rarefied flows. For forward problems, the PINN-BGK is used to solve various benchmark flows given boundary/initial conditions, e.g., Kovasznay flow, Taylor-Green flow, cavity flow, and micro Couette flow for Knudsen numbers up to 5. For inverse problems, we focus on rarefied flows, for which accurate boundary conditions are difficult to obtain. We employ the PINN-BGK to infer the flow field in the entire computational domain given a limited number of interior scattered velocity measurements and unknown boundary conditions. Results for the two-dimensional micro Couette and micro cavity flows with Knudsen numbers ranging from 0.1 to 10 indicate that the PINN-BGK can infer the velocity field in the entire domain with good accuracy. Finally, we also present results on using transfer learning to accelerate the training process: we obtain a three-fold speedup over the standard training process (e.g., Adam plus L-BFGS-B) for the two-dimensional flow problems considered in our work.
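To make the three-sub-network composition concrete, here is a heavily simplified PyTorch sketch for a 1D-in-space, 1D-in-velocity BGK model. The network sizes, the constant relaxation time tau, and the free-form f_eq (the actual PINN-BGK constrains it toward a Maxwellian and adds boundary/initial terms) are all assumptions; the residual function plays the role of the third sub-network that encodes the equation.

import torch
import torch.nn as nn

def mlp(n_in, n_out, width=32):
    return nn.Sequential(nn.Linear(n_in, width), nn.Tanh(),
                         nn.Linear(width, width), nn.Tanh(),
                         nn.Linear(width, n_out))

net_eq = mlp(3, 1)    # approximates the equilibrium distribution f_eq(t, x, v)
net_neq = mlp(3, 1)   # approximates the non-equilibrium part f_neq(t, x, v)
tau = 0.1             # assumed constant relaxation time

def bgk_residual(t, x, v):
    # Residual of f_t + v * f_x = (f_eq - f) / tau with f = f_eq + f_neq.
    inp = torch.stack([t, x, v], dim=1).requires_grad_(True)
    f_eq = net_eq(inp)
    f = f_eq + net_neq(inp)
    grads = torch.autograd.grad(f.sum(), inp, create_graph=True)[0]
    f_t, f_x = grads[:, 0:1], grads[:, 1:2]
    return f_t + inp[:, 2:3] * f_x - (f_eq - f) / tau

t, x, v = torch.rand(256), torch.rand(256), 2 * torch.rand(256) - 1
pde_loss = bgk_residual(t, x, v).pow(2).mean()  # add BC/IC mismatch terms in practice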
We present a deep learning algorithm for the numerical solution of parametric families of high-dimensional linear Kolmogorov partial differential equations (PDEs). Our method is based on reformulating the numerical approximation of a whole family of Kolmogorov PDEs as a single statistical learning problem using the Feynman-Kac formula. Successful numerical experiments are presented, which empirically confirm the functionality and efficiency of our proposed algorithm in the case of heat equations and Black-Scholes option pricing models parametrized by affine-linear coefficient functions. We show that a single deep neural network trained on simulated data is capable of learning the solution functions of an entire family of PDEs on a full space-time region. Most notably, our numerical observations and theoretical results also demonstrate that the proposed method does not suffer from the curse of dimensionality, distinguishing it from almost all standard numerical methods for PDEs.
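The Feynman-Kac reformulation is easy to state for a single heat equation, so the hedged sketch below fixes u_t + 0.5 u_xx = 0 on [0, T] with terminal data u(T, x) = sin(x); the paper instead trains across a whole parametric family by feeding the coefficients to the network as extra inputs. Since u(t, x) = E[phi(x + W_{T-t})], L2 regression of the simulated payoff on (t, x) recovers the solution.

import torch
import torch.nn as nn

T = 1.0
phi = lambda x: torch.sin(x)                     # assumed terminal condition u(T, x)
net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(5000):
    t = T * torch.rand(1024, 1)                  # random space-time training points
    x = 4.0 * torch.rand(1024, 1) - 2.0
    w = torch.sqrt(T - t) * torch.randn_like(x)  # Brownian increment W_{T-t}, sampled exactly
    y = phi(x + w)                               # one-sample Monte Carlo payoff
    loss = (net(torch.cat([t, x], dim=1)) - y).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
# For this phi the exact solution is u(t, x) = exp(-(T - t) / 2) * sin(x).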
Jared O'Leary, Joel A. Paulson (2021)
Stochastic differential equations (SDEs) are used to describe a wide variety of complex stochastic dynamical systems. Learning the hidden physics within SDEs is crucial for a fundamental understanding of the stochastic and nonlinear behavior of these systems. We propose a flexible and scalable framework for training deep neural networks to learn constitutive equations that represent hidden physics within SDEs. The proposed stochastic physics-informed neural network framework (SPINN) relies on uncertainty propagation and moment-matching techniques along with state-of-the-art deep learning strategies. SPINN first propagates stochasticity through the known structure of the SDE (i.e., the known physics) to predict the time evolution of the statistical moments of the stochastic states. SPINN then learns (deep) neural network representations of the hidden physics by matching the predicted moments to those estimated from data. Recent advances in automatic differentiation and mini-batch gradient descent are leveraged to estimate the unknown parameters of the neural networks. We demonstrate SPINN on three benchmark in-silico case studies and analyze the framework's robustness and numerical stability. SPINN provides a promising new direction for systematically unraveling the hidden physics of multivariate stochastic dynamical systems with multiplicative noise.
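A hedged sketch of the moment-matching idea: the diffusion is taken as known, the drift is the hidden physics represented by a network, and a Monte Carlo ensemble stands in for the paper's uncertainty-propagation machinery. The synthetic "data" moments, generated here from a true drift f(x) = -x, and all sizes and step counts are assumptions for the example.

import torch
import torch.nn as nn

# dX = f(X) dt + sigma dW, with the drift f hidden and the diffusion assumed known.
drift_nn = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
sigma, dt, n_steps, n_paths = 0.2, 0.01, 50, 2048

def predicted_moments(x0):
    # Propagate an ensemble through the SDE and record its first two moments.
    x = x0.repeat(n_paths, 1)
    means, variances = [], []
    for _ in range(n_steps):
        x = x + drift_nn(x) * dt + sigma * dt ** 0.5 * torch.randn_like(x)
        means.append(x.mean())
        variances.append(x.var())
    return torch.stack(means), torch.stack(variances)

with torch.no_grad():                            # synthetic data moments, true drift -x
    x = torch.full((n_paths, 1), 1.0)
    m_data, v_data = [], []
    for _ in range(n_steps):
        x = x - x * dt + sigma * dt ** 0.5 * torch.randn_like(x)
        m_data.append(x.mean())
        v_data.append(x.var())
    m_data, v_data = torch.stack(m_data), torch.stack(v_data)

opt = torch.optim.Adam(drift_nn.parameters(), lr=1e-3)
for step in range(500):                          # fit the drift by matching moments
    m_pred, v_pred = predicted_moments(torch.tensor([[1.0]]))
    loss = (m_pred - m_data).pow(2).mean() + (v_pred - v_data).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()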
Wei Peng, Jun Zhang, Weien Zhou (2021)
Physics-Informed Neural Network (PINN) is a scientific computing framework used to solve both forward and inverse problems modeled by partial differential equations (PDEs). This paper introduces IDRLnet, a Python toolbox for systematically modeling and solving problems with PINNs. IDRLnet constructs the framework for a wide range of PINN algorithms and applications. It provides a structured way to incorporate geometric objects, data sources, artificial neural networks, loss metrics, and optimizers within Python. Furthermore, it provides functionality to solve noisy inverse problems, variational minimization, and integro-differential equations. New PINN variants can be integrated into the framework easily. Source code, tutorials, and documentation are available at https://github.com/idrl-lab/idrlnet.
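For readers unfamiliar with the PINN pattern that such toolboxes organize, here is a bare-bones plain-PyTorch example for -u''(x) = pi^2 sin(pi x) on [0, 1] with u(0) = u(1) = 0 (exact solution sin(pi x)). It deliberately does not use IDRLnet's actual API; see the linked repository for that.

import math
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(3000):
    x = torch.rand(256, 1, requires_grad=True)   # interior collocation points
    u = net(x)
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    pde_loss = (-u_xx - math.pi ** 2 * torch.sin(math.pi * x)).pow(2).mean()
    bc_loss = net(torch.tensor([[0.0], [1.0]])).pow(2).mean()  # enforce u(0)=u(1)=0
    loss = pde_loss + bc_loss
    opt.zero_grad(); loss.backward(); opt.step()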
