Failure trajectories, probable failure zones, and damage statistics are some of the key quantities of interest in brittle fracture applications. High-fidelity numerical solvers that reliably estimate these quantities exist, but they are computationally demanding, requiring a high resolution of the crack. Moreover, independent, intensive simulations must be carried out even for a small change in domain parameters and/or material properties. Fast and generalizable surrogate models are therefore needed to alleviate this computational burden, but the discontinuous nature of fracture mechanics presents a major challenge to developing them. We propose a physics-informed variational formulation of DeepONet (V-DeepONet) for brittle fracture analysis. V-DeepONet is trained to map the initial defect configuration to the relevant fields of interest (e.g., the damage and displacement fields). Once the network is trained, the entire global solution can be obtained rapidly for any initial crack configuration and loading step on that domain. Whereas the original DeepONet is purely data-driven, we take a different path and train V-DeepONet by imposing the governing equations in variational form together with some labelled data. We demonstrate the effectiveness of V-DeepONet on two brittle fracture benchmarks and verify its accuracy against results from high-fidelity solvers. Encoding both the physical laws and some data into the training renders the surrogate model capable of accurately performing interpolation as well as extrapolation tasks, which is important because fracture modeling is highly sensitive to fluctuations. The proposed hybrid training of V-DeepONet outperforms state-of-the-art methods and can be applied to a wide array of dynamical systems with complex responses.
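The hybrid training objective described above can be sketched in a few lines. The following is a minimal PyTorch illustration of a DeepONet forward pass combined with a data term and a variational (energy) term; the layer sizes, the energy functional, and all tensors are placeholders, not the architecture or loss used in the paper.

```python
# Minimal sketch of a DeepONet with a hybrid (data + variational) loss.
# Layer sizes, the energy functional, and inputs are illustrative placeholders.
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    def __init__(self, n_sensors=100, p=64):
        super().__init__()
        # Branch net encodes the initial crack configuration sampled at n_sensors points.
        self.branch = nn.Sequential(nn.Linear(n_sensors, 128), nn.Tanh(), nn.Linear(128, p))
        # Trunk net encodes a query point (x, y) and the load step t.
        self.trunk = nn.Sequential(nn.Linear(3, 128), nn.Tanh(), nn.Linear(128, p))

    def forward(self, crack_config, coords):
        b = self.branch(crack_config)   # (batch, p)
        t = self.trunk(coords)          # (n_points, p)
        return b @ t.T                  # (batch, n_points) predicted field values

def hybrid_loss(model, crack_config, coords, labels, energy_density):
    """Supervised data term plus a variational (potential-energy) term.

    `energy_density` stands in for a discretized energy functional whose
    minimizer satisfies the governing equations; in practice it would
    differentiate the predicted fields with respect to `coords` via autograd.
    """
    pred = model(crack_config, coords)
    data_term = ((pred - labels) ** 2).mean()
    physics_term = energy_density(pred, coords).mean()
    return data_term + physics_term
```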
Physics-Informed Neural Networks (PINNs) form a scientific computing framework used to solve both forward and inverse problems modeled by partial differential equations (PDEs). This paper introduces IDRLnet, a Python toolbox for systematically modeling and solving problems with PINNs. IDRLnet provides a framework for a wide range of PINN algorithms and applications, offering a structured way to incorporate geometric objects, data sources, artificial neural networks, loss metrics, and optimizers within Python. Furthermore, it provides functionality to solve noisy inverse problems, variational minimization, and integro-differential equations. New PINN variants can be integrated into the framework easily. Source code, tutorials, and documentation are available at https://github.com/idrl-lab/idrlnet.
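For reference, the core construction that such toolboxes automate is a PDE residual loss evaluated at collocation points. The sketch below shows this for a 1D Poisson problem u''(x) = f(x) in plain PyTorch; it does not use IDRLnet's actual API, and the source term, network sizes, and boundary conditions are arbitrary choices made only for illustration.

```python
# Generic PINN collocation loss for u''(x) = f(x) with u(0) = u(1) = 0.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))
f = lambda x: torch.sin(x)                       # assumed source term

def pde_residual(x):
    x = x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    return d2u - f(x)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(1000):
    x_col = torch.rand(128, 1)                   # interior collocation points
    x_bc = torch.tensor([[0.0], [1.0]])          # Dirichlet boundary points
    loss = pde_residual(x_col).pow(2).mean() + net(x_bc).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```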
Recently, researchers have used neural networks to solve partial differential equations (PDEs) accurately, enabling mesh-free methods for scientific computation. Unfortunately, network performance drops in regions of the domain with high nonlinearity. To improve generalizability, we introduce multi-task learning techniques, namely uncertainty-weighted loss and gradient surgery, to the learning of PDE solutions. The multi-task scheme exploits the benefits of learning shared representations, controlled by cross-stitch modules, across multiple related PDEs, which are obtained by varying the PDE parameterization coefficients, in order to generalize better on the original PDE. To encourage the network to pay closer attention to the highly nonlinear regions that are more challenging to learn, we also propose adversarial training for generating supplementary high-loss samples distributed similarly to the original training data. In our experiments, the proposed methods are found to be effective and to reduce the error on unseen data points compared with previous approaches across various PDE examples, including high-dimensional stochastic PDEs.
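One of the two multi-task ingredients, uncertainty-based loss weighting, can be sketched compactly. The module below follows the standard homoscedastic-uncertainty formulation with learned log-variances and placeholder per-task losses; it is an illustration of the weighting scheme, not the authors' exact implementation.

```python
# Sketch of homoscedastic uncertainty weighting for balancing several per-PDE losses;
# learned log-variances down-weight harder or noisier tasks automatically.
import torch
import torch.nn as nn

class UncertaintyWeighting(nn.Module):
    def __init__(self, n_tasks):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(n_tasks))  # one s_i per task

    def forward(self, task_losses):
        # total = sum_i exp(-s_i) * L_i + s_i  (weight plus regularizer per task)
        return sum(torch.exp(-s) * l + s for s, l in zip(self.log_vars, task_losses))

# Usage with placeholder losses:
# weigh = UncertaintyWeighting(3)
# total_loss = weigh([loss_pde_a, loss_pde_b, loss_pde_c])
```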
We introduce the concept of a Graph-Informed Neural Network (GINN), a hybrid approach combining deep learning with probabilistic graphical models (PGMs) that acts as a surrogate for physics-based representations of multiscale and multiphysics systems. GINNs address the twin challenges of removing intrinsic computational bottlenecks in physics-based models and generating large data sets for estimating probability distributions of quantities of interest (QoIs) with a high degree of confidence. Both the selection of the complex physics learned by the NN and its supervised learning/prediction are informed by the PGM, which includes the formulation of structured priors for tunable control variables (CVs) to account for their mutual correlations and ensure physically sound CV and QoI distributions. GINNs accelerate the prediction of QoIs essential for simulation-based decision-making where generating sufficient sample data using physics-based models alone is often prohibitively expensive. Using a real-world application grounded in supercapacitor-based energy storage, we describe the construction of GINNs from a Bayesian network-embedded homogenized model for supercapacitor dynamics, and demonstrate their ability to produce kernel density estimates of relevant non-Gaussian, skewed QoIs with tight confidence intervals.
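The prediction-time role of a GINN can be summarized by a short sketch: draw control variables from a structured prior, push them through the cheap NN surrogate, and build a kernel density estimate of the QoI. The prior, the surrogate, and the dimensions below are stand-ins chosen only for illustration and are not taken from the paper.

```python
# Sketch of the GINN inference loop: correlated CVs -> NN surrogate -> KDE of the QoI.
import numpy as np
from scipy.stats import gaussian_kde

def sample_cvs(n):
    # Stand-in for the Bayesian-network prior over correlated control variables.
    mean, cov = np.zeros(2), np.array([[1.0, 0.6], [0.6, 1.0]])
    return np.random.multivariate_normal(mean, cov, size=n)

def surrogate(cvs):
    # Stand-in for the trained NN mapping control variables to a scalar QoI.
    return np.tanh(cvs[:, 0]) + 0.1 * cvs[:, 1] ** 2

cvs = sample_cvs(100_000)          # cheap once the surrogate replaces the physics solver
qoi = surrogate(cvs)
kde = gaussian_kde(qoi)            # kernel density estimate of the (possibly skewed) QoI
grid = np.linspace(qoi.min(), qoi.max(), 200)
density = kde(grid)
```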
In recent years, convolutional neural networks (CNNs) have attracted increasing interest for their ability to rapidly approximate effective hydrodynamic parameters in porous media research and applications. This paper presents a novel methodology for permeability prediction from micro-CT scans of geological rock samples. The permeability labels in training data sets for such CNNs are typically generated by classical lattice Boltzmann methods (LBM) that simulate the flow through the pore space of the segmented image data. We instead perform direct numerical simulation (DNS) by solving the stationary Stokes equation in an efficient, distributed-parallel manner. We thereby circumvent the convergence issues of LBM that are frequently observed for complex pore geometries and thus improve the generality and accuracy of our training data set. Using the DNS-computed permeabilities, a physics-informed CNN (PhyCNN) is trained by additionally providing a tailored characteristic quantity of the pore space. More precisely, by exploiting the connection to flow problems on a graph representation of the pore space, additional information about confined structures is supplied to the network in the form of the maximum flow value, which is the key innovative component of our workflow. As a result, unprecedented prediction accuracy and robustness are observed for a variety of sandstone samples from archetypal rock formations.
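The maximum-flow feature can be illustrated on a simplified 2D pore mask: connected pore pixels become graph nodes, pore throats become unit-capacity edges, and a virtual source/sink pair spans the inflow and outflow faces. This sketch uses networkx with hypothetical unit capacities; the paper's actual graph construction on 3D micro-CT data may differ.

```python
# Sketch: maximum flow through a 2D pore mask as an auxiliary scalar input to the CNN.
import networkx as nx
import numpy as np

def pore_max_flow(pore_mask):
    """pore_mask: 2D boolean array, True where a pixel belongs to the pore space."""
    g = nx.DiGraph()
    rows, cols = pore_mask.shape
    for r, c in zip(*np.nonzero(pore_mask)):
        for dr, dc in [(1, 0), (0, 1)]:
            nbr = (r + dr, c + dc)
            if nbr[0] < rows and nbr[1] < cols and pore_mask[nbr]:
                # unit capacity per pore throat, in both directions
                g.add_edge((r, c), nbr, capacity=1.0)
                g.add_edge(nbr, (r, c), capacity=1.0)
    # Virtual source/sink attached to pore pixels on the inflow/outflow columns;
    # edges without a capacity attribute are treated by networkx as infinite.
    for r in np.nonzero(pore_mask[:, 0])[0]:
        g.add_edge("source", (r, 0))
    for r in np.nonzero(pore_mask[:, -1])[0]:
        g.add_edge((r, cols - 1), "sink")
    value, _ = nx.maximum_flow(g, "source", "sink")
    return value

# Example: two horizontal channels, one of which is pinched off.
mask = np.array([[1, 1, 1, 1],
                 [0, 0, 0, 0],
                 [1, 1, 0, 1]], dtype=bool)
print(pore_max_flow(mask))   # -> 1.0: only the open channel carries flow
```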
In this work we introduce normalizing field flows (NFF) for learning random fields from scattered measurements. More precisely, we construct a bijective transformation (a normalizing flow characterized by neural networks) between a Gaussian random field with a Karhunen-Loève (KL) expansion structure and the target stochastic field, where the KL expansion coefficients and the invertible networks are trained by maximizing the sum of the log-likelihood over the scattered measurements. The NFF model can be used to solve data-driven forward, inverse, and mixed forward/inverse stochastic partial differential equations in a unified framework. We demonstrate the capability of the proposed NFF model for learning non-Gaussian processes and different types of stochastic partial differential equations.
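The maximum-likelihood training behind such a flow rests on the change-of-variables formula, which can be sketched with a single affine-coupling layer acting on a vector of coefficients. Dimensions, data, and network sizes below are placeholders, and the sketch illustrates only the likelihood objective, not the full NFF construction with KL modes and scattered field measurements.

```python
# Minimal change-of-variables likelihood with one affine-coupling layer.
import torch
import torch.nn as nn

d = 8                                            # number of retained modes (placeholder)
half = d // 2

class AffineCoupling(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(half, 64), nn.Tanh(), nn.Linear(64, 2 * half))

    def inverse(self, x):
        """Map data x back to latent z; return log|det J| of the forward map."""
        x1, x2 = x[:, :half], x[:, half:]
        s, t = self.net(x1).chunk(2, dim=1)
        z2 = (x2 - t) * torch.exp(-s)
        return torch.cat([x1, z2], dim=1), s.sum(dim=1)

flow = AffineCoupling()
base = torch.distributions.Normal(0.0, 1.0)      # Gaussian latent (KL coefficients)
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)

x_obs = torch.randn(256, d)                      # placeholder "measured" coefficients
for _ in range(500):
    z, log_det = flow.inverse(x_obs)
    # log p_X(x) = log p_Z(z) - log|det dX/dZ|
    loglik = base.log_prob(z).sum(dim=1) - log_det
    loss = -loglik.mean()
    opt.zero_grad(); loss.backward(); opt.step()
```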