Conventional artificial neural networks are powerful tools in science and industry, but they can fail when applied to nonlinear systems where order and chaos coexist. We use neural networks that incorporate the structures and symmetries of Hamiltonian dynamics to predict phase space trajectories even as nonlinear systems transition from order to chaos. We demonstrate Hamiltonian neural networks on the canonical Hénon-Heiles system, which models diverse dynamics from astrophysics to chemistry. The power of the technique and the ubiquity of chaos suggest widespread utility.
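To make the Hamiltonian-network idea above concrete, the following minimal sketch (an assumed PyTorch-style implementation, not the authors' code) learns a scalar Hamiltonian H_theta(q, p) and obtains phase-space velocities from its symplectic gradient, which is the structure such networks build in; the four-dimensional state and the target derivatives are placeholders standing in for Hénon-Heiles trajectory data.

```python
import torch
import torch.nn as nn

# Hypothetical minimal Hamiltonian neural network: a scalar network H_theta(z)
# whose symplectic gradient gives the predicted phase-space velocity.
class HNN(nn.Module):
    def __init__(self, dim=4, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1))

    def time_derivative(self, z):
        # z = (q1, q2, p1, p2); enable grad so H can be differentiated w.r.t. z
        z = z.requires_grad_(True)
        H = self.net(z).sum()
        dH = torch.autograd.grad(H, z, create_graph=True)[0]
        dHdq, dHdp = dH[:, :2], dH[:, 2:]
        # Hamilton's equations: dq/dt = dH/dp, dp/dt = -dH/dq
        return torch.cat([dHdp, -dHdq], dim=1)

model = HNN()
z = torch.randn(32, 4)           # batch of phase-space points (placeholder data)
z_dot_true = torch.randn(32, 4)  # observed time derivatives (placeholder data)
loss = ((model.time_derivative(z) - z_dot_true) ** 2).mean()
loss.backward()
```

Because the network parameterizes a single conserved scalar rather than the velocity field directly, energy conservation is enforced by construction, which is what allows the trajectories to remain sensible across the order-to-chaos transition.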
Sampling complex free energy surfaces is one of the main challenges of modern atomistic simulation methods. The presence of kinetic bottlenecks in such surfaces often renders a direct approach useless. A popular strategy is to identify a small number of key collective variables and to introduce a bias potential that favors their fluctuations in order to accelerate sampling. Here we propose to use machine learning techniques in conjunction with the recent variationally enhanced sampling method [Valsson and Parrinello, Physical Review Letters 2014] to determine such a potential. This is achieved by expressing the bias as a neural network. The parameters are determined in a reinforcement learning scheme aimed at minimizing the variationally enhanced sampling functional. This required the development of a new, more efficient minimization technique. The expressivity of neural networks allows accelerating sampling in systems with rapidly varying free energy surfaces, removing boundary-effect artifacts, and taking one more step toward being able to handle several collective variables.
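As an illustrative sketch (assumed form, not the authors' code), the bias V_theta(s) over a collective variable s can be a small network, and the gradient of the variationally enhanced sampling functional Omega[V] estimated as a difference of two expectations, dOmega/dtheta = -<dV/dtheta>_{biased ensemble} + <dV/dtheta>_{target p(s)}; the paper develops its own, more efficient minimization technique, so the generic Adam step below is purely for illustration, and the CV samples are placeholders.

```python
import torch
import torch.nn as nn

# Neural-network bias potential over a single collective variable s.
class BiasNet(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(),
                                 nn.Linear(hidden, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1))
    def forward(self, s):
        return self.net(s)

bias = BiasNet()
opt = torch.optim.Adam(bias.parameters(), lr=1e-3)  # stand-in optimizer for illustration

s_biased = torch.randn(256, 1)  # CV samples from the biased simulation (placeholder)
s_target = torch.rand(256, 1)   # samples drawn from the chosen target distribution p(s)

# Surrogate loss whose gradient reproduces the two-ensemble estimate of dOmega/dtheta.
loss = -bias(s_biased).mean() + bias(s_target).mean()
opt.zero_grad()
loss.backward()
opt.step()
```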
Physics-informed neural networks (PINNs) are effective in solving integer-order partial differential equations (PDEs) based on scattered and noisy data. PINNs employ standard feedforward neural networks (NNs) with the PDEs explicitly encoded into the NN using automatic differentiation, while the sum of the mean-squared PDE-residuals and the mean-squared error in initial/boundary conditions is minimized with respect to the NN parameters. We extend PINNs to fractional PINNs (fPINNs) to solve space-time fractional advection-diffusion equations (fractional ADEs), and we demonstrate their accuracy and effectiveness in solving multi-dimensional forward and inverse problems with forcing terms whose values are only known at randomly scattered spatio-temporal coordinates (black-box forcing terms). A novel element of the fPINNs is the hybrid approach that we introduce for constructing the residual in the loss function using both automatic differentiation for the integer-order operators and numerical discretization for the fractional operators. We consider 1D time-dependent fractional ADEs and compare white-box (WB) and black-box (BB) forcing. We observe that for the BB forcing fPINNs outperform the finite difference method (FDM). Subsequently, we consider multi-dimensional time-, space-, and space-time-fractional ADEs using the directional fractional Laplacian and we observe relative errors of $10^{-4}$. Finally, we solve several inverse problems in 1D, 2D, and 3D to identify the fractional orders, diffusion coefficients, and transport velocities and obtain accurate results even in the presence of significant noise.
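The hybrid residual can be illustrated with the following sketch (assumed details, not the paper's code) for a 1D fractional ADE of the form u_t + v u_x = D d^alpha u/dx^alpha + f: the integer-order terms u_t and u_x come from automatic differentiation, while the fractional derivative is approximated numerically with a shifted Grünwald-Letnikov sum over network evaluations; the coefficients, step size, and zero forcing are placeholders.

```python
import torch
import torch.nn as nn

# Small PINN for u(x, t).
net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))

def u(x, t):
    return net(torch.cat([x, t], dim=1))

def gl_weights(alpha, K):
    # Recursive Grunwald-Letnikov coefficients g_k = (-1)^k * binom(alpha, k)
    w = [1.0]
    for k in range(1, K + 1):
        w.append(w[-1] * (k - 1 - alpha) / k)
    return torch.tensor(w)

def residual(x, t, alpha=1.5, v=1.0, D=0.1, h=0.01, K=64):
    x = x.requires_grad_(True); t = t.requires_grad_(True)
    u_val = u(x, t)
    # Integer-order terms via automatic differentiation
    u_t = torch.autograd.grad(u_val.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u_val.sum(), x, create_graph=True)[0]
    # Fractional term via numerical (shifted Grunwald-Letnikov) discretization
    w = gl_weights(alpha, K)
    frac = sum(w[k] * u(x - (k - 1) * h, t) for k in range(K + 1)) / h ** alpha
    f = torch.zeros_like(u_val)  # a black-box forcing term would be interpolated data
    return u_t + v * u_x - D * frac - f

x = torch.rand(128, 1); t = torch.rand(128, 1)
loss = (residual(x, t) ** 2).mean()
loss.backward()
```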
Multifidelity simulation methodologies are often used to judiciously combine low-fidelity and high-fidelity simulation results in a way that increases accuracy while saving cost. This approach suits simulation methodologies whose fidelity differences are tied to significant differences in computational cost. Physics-informed neural networks (PINNs) are natural candidates because the training time required differs significantly when different fidelities (expressed in terms of architecture width and depth as well as optimization criteria) are employed. In this paper, we propose a particular multifidelity approach applied to PINNs that exploits low-rank structure. We demonstrate that width, depth, and optimization criteria can be used as parameters related to model fidelity, and we provide numerical evidence of the training-cost differences induced by these fidelity-parameter choices. We test our multifidelity scheme on various canonical forward PDE models that have been presented in the emerging PINNs literature.
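The following sketch only illustrates the notion of width and depth as fidelity parameters (the sizes are assumptions, and it does not reproduce the paper's low-rank construction): a narrow, shallow PINN is cheap to train and a wide, deep one is expensive, and a multifidelity scheme tries to spend most of the budget on the cheap model.

```python
import torch.nn as nn

def make_pinn(width, depth, in_dim=2, out_dim=1):
    # Fully connected tanh network; width and depth act as fidelity parameters.
    layers = [nn.Linear(in_dim, width), nn.Tanh()]
    for _ in range(depth - 1):
        layers += [nn.Linear(width, width), nn.Tanh()]
    layers += [nn.Linear(width, out_dim)]
    return nn.Sequential(*layers)

low_fidelity  = make_pinn(width=10, depth=2)    # few parameters, loose optimization tolerance
high_fidelity = make_pinn(width=100, depth=6)   # many parameters, tight optimization tolerance

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(low_fidelity), count(high_fidelity))  # rough proxy for the training-cost gap
```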
We introduce the concept of a Graph-Informed Neural Network (GINN), a hybrid approach combining deep learning with probabilistic graphical models (PGMs) that acts as a surrogate for physics-based representations of multiscale and multiphysics systems. GINNs address the twin challenges of removing intrinsic computational bottlenecks in physics-based models and generating large data sets for estimating probability distributions of quantities of interest (QoIs) with a high degree of confidence. Both the selection of the complex physics learned by the NN and its supervised learning/prediction are informed by the PGM, which includes the formulation of structured priors for tunable control variables (CVs) to account for their mutual correlations and ensure physically sound CV and QoI distributions. GINNs accelerate the prediction of QoIs essential for simulation-based decision-making where generating sufficient sample data using physics-based models alone is often prohibitively expensive. Using a real-world application grounded in supercapacitor-based energy storage, we describe the construction of GINNs from a Bayesian network-embedded homogenized model for supercapacitor dynamics, and demonstrate their ability to produce kernel density estimates of relevant non-Gaussian, skewed QoIs with tight confidence intervals.
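A minimal end-to-end sketch of the GINN workflow, under illustrative assumptions (a two-variable correlated Gaussian standing in for the Bayesian-network structured prior, and a toy closed-form physics model in place of the homogenized supercapacitor model): sample correlated control variables, train an NN surrogate on physics-model outputs, then obtain a kernel density estimate of the QoI from cheap surrogate evaluations.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
cov = np.array([[1.0, 0.6], [0.6, 1.0]])            # CV correlations encoded by the PGM (assumed)
cvs = rng.multivariate_normal([0.0, 0.0], cov, size=2000)
qoi = np.exp(-cvs[:, 0]) * (1.0 + cvs[:, 1] ** 2)   # placeholder physics-based model output

x = torch.tensor(cvs, dtype=torch.float32)
y = torch.tensor(qoi, dtype=torch.float32).unsqueeze(1)
surrogate = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for _ in range(500):                                 # supervised training of the surrogate
    opt.zero_grad()
    loss = ((surrogate(x) - y) ** 2).mean()
    loss.backward()
    opt.step()

# Cheap surrogate calls on many new prior samples -> kernel density estimate of the QoI
new_cvs = rng.multivariate_normal([0.0, 0.0], cov, size=50000)
pred = surrogate(torch.tensor(new_cvs, dtype=torch.float32)).detach().numpy().ravel()
kde = gaussian_kde(pred)
```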
Inverse design arises in a variety of areas of engineering, such as acoustics, mechanics, thermal/electronic transport, electromagnetism, and optics. Topology optimization is a major form of inverse design, in which we optimize a designed geometry, parameterized by a density function, to achieve targeted properties. This optimization is challenging because it has very high dimensionality and is usually constrained by partial differential equations (PDEs) and additional inequalities. Here, we propose a new deep learning method -- physics-informed neural networks with hard constraints (hPINNs) -- for solving topology optimization. hPINN leverages the recent development of PINNs for solving PDEs and thus does not rely on any numerical PDE solver. However, the constraints in standard PINNs are soft constraints, so we impose hard constraints using the penalty method and the augmented Lagrangian method. We demonstrate the effectiveness of hPINN on a holography problem in optics and a Stokes-flow problem in fluids. We achieve the same objective as conventional PDE-constrained optimization methods based on adjoint methods and numerical PDE solvers, but we find that the design obtained from hPINN is often simpler and smoother for problems whose solution is not unique. Moreover, implementing inverse design with hPINN can be easier than with conventional methods.
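The augmented Lagrangian idea behind hPINN can be sketched as follows (placeholder objective and constraint functions, not the paper's implementation): the design objective J is minimized subject to a PDE-residual constraint c(theta) = 0, enforced through L = J + lambda c + (mu/2) c^2, with the multiplier lambda updated after each inner optimization pass.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def objective(model):
    # Placeholder for the design objective J (e.g., mismatch to a target field)
    x = torch.rand(128, 2)
    return model(x).pow(2).mean()

def pde_residual(model):
    # Placeholder scalar summarizing the mean PDE residual c
    x = torch.rand(128, 2)
    return model(x).mean()

lam, mu = 0.0, 1.0
for outer in range(5):                    # augmented Lagrangian outer loop
    for inner in range(200):              # inner minimization over network parameters
        opt.zero_grad()
        c = pde_residual(net)
        L = objective(net) + lam * c + 0.5 * mu * c ** 2
        L.backward()
        opt.step()
    with torch.no_grad():
        c = pde_residual(net)
    lam = lam + mu * float(c)             # multiplier update
    mu = 2.0 * mu                         # optionally tighten the penalty weight
```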