
GFINNs: GENERIC Formalism Informed Neural Networks for Deterministic and Stochastic Dynamical Systems

Posted by: Yeonjong Shin
Publication date: 2021
Research field: Informatics engineering
Paper language: English





We propose the GENERIC formalism informed neural networks (GFINNs), which obey the symmetric degeneracy conditions of the GENERIC formalism. GFINNs comprise two modules, each of which contains two components. We model each component using a neural network whose architecture is designed to satisfy the required conditions. The component-wise architecture design provides flexible ways of embedding available physics information into the networks. We prove theoretically that GFINNs are sufficiently expressive to learn the underlying equations, thereby establishing a universal approximation theorem. We demonstrate the performance of GFINNs on three simulation problems: gas containers exchanging heat and volume, a thermoelastic double pendulum, and Langevin dynamics. In all three examples, GFINNs outperform existing methods, yielding accurate predictions for both deterministic and stochastic systems.
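For context, in the GENERIC formalism the state z evolves as dz/dt = L(z)∇E(z) + M(z)∇S(z), where E is the energy, S the entropy, L(z) is skew-symmetric, M(z) is symmetric positive semi-definite, and the degeneracy conditions L(z)∇S(z) = 0 and M(z)∇E(z) = 0 encode the first and second laws of thermodynamics. The sketch below illustrates one simple way to build these conditions into a network, by projecting learned matrices onto the complements of the gradients; it is an illustrative construction under these assumptions, not the GFINN architecture itself, and all class and layer names are hypothetical.

```python
import torch
import torch.nn as nn

class GenericField(nn.Module):
    """Illustrative GENERIC-structured vector field (not the GFINN design):
    dz/dt = L(z) grad E(z) + M(z) grad S(z), with L skew-symmetric,
    M symmetric positive semi-definite, and the degeneracy conditions
    L grad S = 0 and M grad E = 0 enforced exactly by projection."""

    def __init__(self, dim, hidden=64):
        super().__init__()
        mlp = lambda out: nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, out))
        self.E, self.S = mlp(1), mlp(1)              # energy and entropy networks
        self.A, self.B = mlp(dim * dim), mlp(dim * dim)
        self.dim = dim

    def forward(self, z):                            # z: (batch, dim)
        z = z.detach().requires_grad_(True)
        gE = torch.autograd.grad(self.E(z).sum(), z, create_graph=True)[0]
        gS = torch.autograd.grad(self.S(z).sum(), z, create_graph=True)[0]

        def proj(g):                                 # projector onto complement of g
            outer = g.unsqueeze(-1) * g.unsqueeze(-2)
            norm2 = g.pow(2).sum(-1, keepdim=True).unsqueeze(-1)
            return torch.eye(self.dim, device=z.device) - outer / (norm2 + 1e-8)

        A = self.A(z).view(-1, self.dim, self.dim)
        L = proj(gS) @ (A - A.transpose(1, 2)) @ proj(gS)    # skew; L gS = 0
        Bp = proj(gE) @ self.B(z).view(-1, self.dim, self.dim)
        M = Bp @ Bp.transpose(1, 2)                          # PSD; M gE = 0

        return (L @ gE.unsqueeze(-1) + M @ gS.unsqueeze(-1)).squeeze(-1)
```

Training then amounts to matching this field to observed trajectories, for example with a residual or integrator-based loss; the degeneracy conditions hold by construction rather than through penalty terms.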




Read also

Akshunna S. Dogra (2020)
Neural Networks (NNs) have been identified as a potentially powerful tool in the study of complex dynamical systems. A good example is the NN differential equation (DE) solver, which provides closed-form, differentiable, functional approximations for the evolution of a wide variety of dynamical systems. A major disadvantage of such NN solvers can be the amount of computational resources needed to achieve accuracy comparable to existing numerical solvers. We present new strategies for existing dynamical system NN DE solvers that make efficient use of the learnt information to speed up their training process, while still pursuing a completely unsupervised approach. We establish a fundamental connection between NN theory and dynamical systems theory via Koopman Operator Theory (KOT), by showing that the usual training processes for neural nets are fertile ground for identifying multiple Koopman operators of interest. We end by illuminating certain applications that KOT might have for NNs in general.
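For context, an unsupervised NN DE solver of the kind this abstract refers to trains a network to satisfy the differential equation itself rather than to fit solution data. A minimal sketch for a toy ODE follows; the equation, network shape, and training loop are assumptions for illustration, not the paper's specific acceleration or Koopman machinery.

```python
import torch
import torch.nn as nn

# Toy unsupervised NN DE solver for dx/dt = -x with x(0) = 1 (assumed example).
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    t = torch.rand(256, 1, requires_grad=True)   # collocation times in [0, 1]
    x = 1.0 + t * net(t)                         # trial solution; x(0) = 1 by construction
    dxdt = torch.autograd.grad(x.sum(), t, create_graph=True)[0]
    loss = ((dxdt + x) ** 2).mean()              # ODE residual; no labeled data used
    opt.zero_grad(); loss.backward(); opt.step()
```

The result is a closed-form, differentiable approximation to x(t) = e^{-t}, which is what makes such solvers attractive despite their training cost.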
Effective inclusion of physics-based knowledge in deep neural network models of dynamical systems can greatly improve data efficiency and generalization. Such a priori knowledge might arise from physical principles (e.g., conservation laws) or from the system's design (e.g., the Jacobian matrix of a robot), even if large portions of the system dynamics remain unknown. We develop a framework to learn dynamics models from trajectory data while incorporating a priori system knowledge as an inductive bias. More specifically, the proposed framework uses physics-based side information to inform the structure of the neural network itself, and to place constraints on the values of the outputs and the internal states of the model. It represents the system's vector field as a composition of known and unknown functions, the latter of which are parametrized by neural networks. The physics-informed constraints are enforced via the augmented Lagrangian method during the model's training. We experimentally demonstrate the benefits of the proposed approach on a variety of dynamical systems -- including a benchmark suite of robotics environments featuring large state spaces, non-linear dynamics, external forces, contact forces, and control inputs. By exploiting a priori system knowledge during training, the proposed approach learns to predict the system dynamics two orders of magnitude more accurately than a baseline approach that does not include prior knowledge, given the same training dataset.
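A stripped-down version of the two ingredients described here, a vector field composed of known and unknown parts and a constraint enforced with the augmented Lagrangian method, might look as follows. The damping term, synthetic data, and side constraint are all hypothetical stand-ins, not the paper's benchmarks.

```python
import torch
import torch.nn as nn

class HybridField(nn.Module):
    """Vector field = known physics term + unknown residual network."""
    def __init__(self, dim):
        super().__init__()
        self.residual = nn.Sequential(
            nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))
    def forward(self, x):
        return -0.1 * x + self.residual(x)      # known damping + learned part

def batch():                                     # synthetic (x, dx/dt) snapshots
    x = torch.randn(128, 2)
    return x, -0.1 * x + 0.05 * torch.sin(x)

model = HybridField(dim=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam, mu = 0.0, 10.0                              # multiplier and penalty weight

for outer in range(20):                          # dual (multiplier) updates
    for inner in range(200):                     # primal minimization
        x, dx = batch()
        c = (model(x) * x).sum(-1).mean()        # toy side constraint c(theta) = 0
        loss = ((model(x) - dx) ** 2).mean() + lam * c + 0.5 * mu * c ** 2
        opt.zero_grad(); loss.backward(); opt.step()
    lam += mu * float(c)                         # augmented-Lagrangian update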
Multifidelity simulation methodologies are often used to judiciously combine low-fidelity and high-fidelity simulation results in an accuracy-increasing, cost-saving way. Candidates for this approach are simulation methodologies for which there are fidelity differences connected with significant computational cost differences. Physics-informed Neural Networks (PINNs) are candidates for these types of approaches due to the significant difference in training times required when different fidelities (expressed in terms of architecture width and depth as well as optimization criteria) are employed. In this paper, we propose a particular multifidelity approach applied to PINNs that exploits low-rank structure. We demonstrate that width, depth, and optimization criteria can be used as parameters related to model fidelity, and show numerical justification of cost differences in training due to fidelity parameter choices. We test our multifidelity scheme on various canonical forward PDE models that have been presented in the emerging PINNs literature.
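The fidelity knobs mentioned here are concrete: a PINN's width and depth directly set its training cost. Below is a hedged sketch of one common multifidelity composition, training a cheap network first and letting a larger one learn only a correction; this illustrates the general idea, not the paper's low-rank scheme, and the sizes are arbitrary.

```python
import torch.nn as nn

def make_pinn(width, depth):
    """PINN trunk; width and depth act as fidelity parameters."""
    layers = [nn.Linear(1, width), nn.Tanh()]
    for _ in range(depth - 1):
        layers += [nn.Linear(width, width), nn.Tanh()]
    return nn.Sequential(*layers, nn.Linear(width, 1))

low = make_pinn(width=8, depth=2)     # low fidelity: cheap to train on the PDE residual
high = make_pinn(width=64, depth=4)   # high fidelity: trained afterwards as a correction

def u(t):
    # Multifidelity prediction: frozen low-fidelity output plus learned correction.
    return low(t).detach() + high(t)
```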
Lee DeVille, Eugene Lerman (2013)
We propose a new framework for the study of continuous time dynamical systems on networks. We view such dynamical systems as collections of interacting control systems. We show that a class of maps between graphs called graph fibrations give rise to maps between dynamical systems on networks. This allows us to produce conjugacies between dynamical systems out of combinatorial data. In particular, we show that surjective graph fibrations lead to synchrony subspaces in networks. Injective graph fibrations, on the other hand, give rise to surjective maps from large dynamical systems to smaller ones. One can view these surjections as a kind of fast/slow variable decomposition or as abstractions in the computer-science sense of the word.
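To make the synchrony statement concrete, here is the smallest worked example, standard in this literature though not taken verbatim from the paper: the 2-cycle graph G fibers onto a single vertex B with one self-loop, and the fiber-wise diagonal is invariant.

```latex
% G: two vertices feeding each other; B: one vertex with a self-loop.
% A system on B is \dot{y} = f(y, y); its pullback along the fibration is
\begin{aligned}
  \dot{x}_1 &= f(x_1, x_2), \\
  \dot{x}_2 &= f(x_2, x_1).
\end{aligned}
% On the diagonal x_1 = x_2 = y both equations reduce to \dot{y} = f(y, y),
% so the diagonal is an invariant synchrony subspace induced by the
% surjective fibration G -> B.
```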
Despite the significant progress over the last 50 years in simulating flow problems using numerical discretization of the Navier-Stokes equations (NSE), we still cannot seamlessly incorporate noisy data into existing algorithms, mesh generation is complex, and we cannot tackle high-dimensional problems governed by parametrized NSE. Moreover, solving inverse flow problems is often prohibitively expensive and requires complex and expensive formulations and new computer codes. Here, we review flow physics-informed learning, which seamlessly integrates data and mathematical models and implements them using physics-informed neural networks (PINNs). We demonstrate the effectiveness of PINNs for inverse problems related to three-dimensional wake flows, supersonic flows, and biomedical flows.
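As a concrete instance of the inverse-problem setting, the sketch below infers an unknown diffusivity from noisy observations by minimizing a data misfit plus the residual of u_t = nu * u_xx; the toy PDE, synthetic data, and all names are assumptions standing in for the far richer Navier-Stokes problems the review covers.

```python
import math
import torch
import torch.nn as nn

# Toy inverse problem (assumed for illustration): recover nu in u_t = nu * u_xx
# from noisy measurements, jointly with the solution field itself.
net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
log_nu = nn.Parameter(torch.tensor(0.0))         # unknown physical parameter
opt = torch.optim.Adam(list(net.parameters()) + [log_nu], lr=1e-3)

def data():                                      # synthetic noisy data, true nu = 0.1
    tx = torch.rand(64, 2)
    u = torch.exp(-0.1 * math.pi**2 * tx[:, :1]) * torch.sin(math.pi * tx[:, 1:])
    return tx, u + 0.01 * torch.randn_like(u)

def pde_residual(tx):
    tx = tx.requires_grad_(True)
    u = net(tx)
    du = torch.autograd.grad(u.sum(), tx, create_graph=True)[0]
    u_t, u_x = du[:, :1], du[:, 1:]
    u_xx = torch.autograd.grad(u_x.sum(), tx, create_graph=True)[0][:, 1:]
    return u_t - torch.exp(log_nu) * u_xx        # exp keeps nu positive

for step in range(5000):
    tx_d, u_d = data()
    loss = (pde_residual(torch.rand(256, 2)) ** 2).mean() \
         + ((net(tx_d) - u_d) ** 2).mean()       # physics loss + data misfit
    opt.zero_grad(); loss.backward(); opt.step()
```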


