
Neural Closure Models for Dynamical Systems

Posted by Abhinav Gupta
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Complex dynamical systems are used for predictions in many domains. Because of computational costs, models are truncated, coarsened, or aggregated. As the neglected and unresolved terms become important, the utility of model predictions diminishes. We develop a novel, versatile, and rigorous methodology to learn non-Markovian closure parameterizations for known-physics/low-fidelity models using data from high-fidelity simulations. The new neural closure models augment low-fidelity models with neural delay differential equations (nDDEs), motivated by the Mori-Zwanzig formulation and the inherent delays in complex dynamical systems. We demonstrate that neural closures efficiently account for truncated modes in reduced-order models, capture the effects of subgrid-scale processes in coarse models, and augment the simplification of complex biological and physical-biogeochemical models. We find that using non-Markovian over Markovian closures improves long-term prediction accuracy and requires smaller networks. We derive the adjoint equations and network architectures needed to efficiently implement the new discrete and distributed nDDEs for any time-integration scheme, allowing nonuniformly-spaced temporal training data. The performance of discrete over distributed delays in closure models is explained using information theory, and we find an optimal amount of past information for a specified architecture. Finally, we analyze computational complexity and explain the limited additional cost due to neural closure models.
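
As a rough illustration of the paper's central construct, the sketch below augments a known low-fidelity right-hand side with a neural network that also sees the state one delay in the past, i.e. a discrete-delay nDDE closure. The toy dynamics, network size, and forward-Euler integrator are illustrative assumptions, not the paper's implementation (which derives adjoint equations for general time-integration schemes).

```python
# A minimal sketch (not the paper's code) of a discrete-delay neural closure:
# dx/dt = f_low(x(t)) + NN(x(t), x(t - tau)).
import torch
import torch.nn as nn

def low_fidelity_rhs(x):
    # Toy known physics: linear damping (stands in for a truncated ROM).
    return -0.5 * x

class DelayClosure(nn.Module):
    # Learns the missing tendency from the current and delayed states.
    def __init__(self, dim, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))

    def forward(self, x_now, x_delayed):
        return self.net(torch.cat([x_now, x_delayed], dim=-1))

def integrate_ndde(x0, closure, tau, dt, n_steps):
    # Forward Euler with a history buffer supplying x(t - tau);
    # before t = tau we reuse x0, i.e. a constant initial history.
    lag = max(1, int(round(tau / dt)))
    history, x = [x0], x0
    for k in range(n_steps):
        x_delayed = history[max(0, k - lag)]
        x = x + dt * (low_fidelity_rhs(x) + closure(x, x_delayed))
        history.append(x)
    return torch.stack(history)

closure = DelayClosure(dim=2)
traj = integrate_ndde(torch.ones(2), closure, tau=0.3, dt=0.01, n_steps=200)
print(traj.shape)  # (201, 2); train by backpropagating through the rollout
```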




Read also

A probabilistic model describes a system in its observational state. In many situations, however, we are interested in the system's response under interventions. The class of structural causal models provides a language that allows us to model the behaviour under interventions. It can be taken as a starting point to answer a plethora of causal questions, including the identification of causal effects or causal structure learning. In this chapter, we provide a natural and straightforward extension of this concept to dynamical systems, focusing on continuous-time models. In particular, we introduce two types of causal kinetic models that differ in how the randomness enters the model: it may be considered either as observational noise or as systematic driving noise. In both cases, we define interventions and thereby provide a possible starting point for causal inference. In this sense, the book chapter provides more questions than answers. The focus of the proposed causal kinetic models lies on the dynamics themselves rather than, for example, on the corresponding stationary distributions. We believe that this is beneficial when the aim is to model the full time evolution of the system and data are measured at different time points. Under this focus, it is natural to consider interventions in the differential equations themselves.
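
To make the notion of an intervention on a kinetic model concrete, here is a hedged sketch: a Lotka-Volterra ODE in its observational form, and a hard intervention do(x2 := c) that removes x2's own dynamics while x1 still responds to the clamped value. The system and constants are illustrative choices, not taken from the chapter.

```python
# Observational kinetic model vs. the intervened model do(x2 := x2_fixed).
import numpy as np
from scipy.integrate import solve_ivp

def observational_rhs(t, z, a=1.0, b=0.4, c=0.4, d=0.1):
    x1, x2 = z                       # prey, predator
    return [a * x1 - b * x1 * x2,    # dx1/dt
            d * x1 * x2 - c * x2]    # dx2/dt

def intervened_rhs(t, z, x2_fixed=1.5):
    # do(x2 := x2_fixed): x2's own equation is removed (dx2/dt = 0),
    # while x1 still responds to the clamped value.
    x1, _ = z
    return [1.0 * x1 - 0.4 * x1 * x2_fixed, 0.0]

z0 = [2.0, 1.5]
obs = solve_ivp(observational_rhs, (0, 20), z0)
itv = solve_ivp(intervened_rhs, (0, 20), z0)
print(obs.y[:, -1], itv.y[:, -1])  # trajectories under obs. vs. intervention
```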
Di Qi, John Harlim (2021)
We propose a Machine Learning (ML) non-Markovian closure modeling framework for accurate predictions of statistical responses of turbulent dynamical systems subjected to external forcings. One of the difficulties in this statistical closure problem is the lack of training data, a configuration that is not desirable in supervised learning with neural network models. In this study with the 40-dimensional Lorenz-96 model, the shortage of data (in time) is due to the stationarity of the statistics beyond the decorrelation time; thus, the only informative content in the training data is the short-time transient statistics. We adopt a unified closure framework for various truncation regimes, including and excluding the detailed dynamical equations for the variances. The closure frameworks employ a Long-Short-Term-Memory architecture to represent the higher-order unresolved statistical feedbacks, with careful consideration to account for the intrinsic instability while producing stable long-time predictions. We find that this unified, agnostic ML approach performs well under various truncation scenarios. Numerically, the ML closure model can accurately predict the long-time statistical responses subjected to various time-dependent external forces that are not in the training dataset, including maximum forcing amplitudes relatively larger than those used in training.
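
A minimal sketch of the kind of closure this abstract describes: an LSTM that maps a short history of resolved statistics to the unresolved higher-order feedback that closes the moment equations. The dimensions and interface below are placeholders, not the authors' configuration.

```python
# Sketch of an LSTM closure for statistical moment equations; the number of
# statistics and the window length are illustrative assumptions.
import torch
import torch.nn as nn

class StatClosure(nn.Module):
    def __init__(self, n_stats, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_stats, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_stats)

    def forward(self, stats_history):
        # stats_history: (batch, time, n_stats) of resolved mean/variance stats
        out, _ = self.lstm(stats_history)
        return self.head(out[:, -1])   # predicted unresolved feedback at t

closure = StatClosure(n_stats=8)
window = torch.randn(16, 20, 8)        # 16 samples, 20 past steps, 8 stats
feedback = closure(window)             # (16, 8), used to close dS/dt
print(feedback.shape)
```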
We propose an effective and lightweight learning algorithm, Symplectic Taylor Neural Networks (Taylor-nets), to conduct continuous, long-term predictions of a complex Hamiltonian dynamical system based on sparse, short-term observations. At the heart of our algorithm is a novel neural network architecture consisting of two sub-networks. Both are embedded with terms in the form of Taylor series expansions designed with a symmetric structure. The key mechanism underpinning our infrastructure is the strong expressiveness and special symmetric property of the Taylor series expansion, which naturally accommodates the numerical fitting of the gradients of the Hamiltonian with respect to the generalized coordinates and preserves its symplectic structure. We further incorporate a fourth-order symplectic integrator, in conjunction with the neural ODE framework, into our Taylor-net architecture to learn the continuous-time evolution of the target systems while simultaneously preserving their symplectic structures. We demonstrate the efficacy of our Taylor-net in predicting a broad spectrum of Hamiltonian dynamical systems, including the pendulum, the Lotka--Volterra, the Kepler, and the Henon--Heiles systems. Our model exhibits unique computational merits, outperforming previous methods to a great extent in prediction accuracy, convergence rate, and robustness despite using extremely small training data with a short training period (6000 times shorter than the prediction period), small sample sizes, and no intermediate data to train the networks.
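
The core mechanism, sketched under simplifying assumptions: learn the Hamiltonian gradients with small networks and advance the state with a symplectic step so the learned flow respects the symplectic structure. A second-order leapfrog stands in here for the paper's fourth-order integrator, and the networks are plain MLPs rather than the symmetric Taylor-series sub-networks.

```python
# Simplified stand-in for the Taylor-net idea: neural approximations of
# grad_q H and grad_p H inside a symplectic (kick-drift-kick) integrator.
import torch
import torch.nn as nn

class GradNet(nn.Module):
    def __init__(self, dim, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, dim))
    def forward(self, x):
        return self.net(x)

dHdq = GradNet(dim=1)   # approximates grad_q H (the force term, up to sign)
dHdp = GradNet(dim=1)   # approximates grad_p H (the velocity term)

def leapfrog(q, p, dt, n_steps):
    # Second-order symplectic stepping with the learned gradients.
    for _ in range(n_steps):
        p = p - 0.5 * dt * dHdq(q)
        q = q + dt * dHdp(p)
        p = p - 0.5 * dt * dHdq(q)
    return q, p

q, p = leapfrog(torch.tensor([1.0]), torch.tensor([0.0]), dt=0.1, n_steps=100)
print(q, p)  # train by matching (q, p) rollouts to sparse observations
```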
Zhiqi Bu, Shiyun Xu, Kan Chen (2020)
When equipped with efficient optimization algorithms, over-parameterized neural networks have demonstrated a high level of performance even though the loss function is non-convex and non-smooth. While many works have focused on understanding the loss dynamics by training neural networks with gradient descent (GD), in this work we consider a broad class of optimization algorithms that are commonly used in practice. For example, we show from a dynamical-systems perspective that the Heavy Ball (HB) method can converge to the global minimum of the mean squared error (MSE) at a linear rate (similar to GD); however, Nesterov accelerated gradient descent (NAG) may only converge to the global minimum sublinearly. Our results rely on the connection between the neural tangent kernel (NTK) and finite over-parameterized neural networks with ReLU activation, which leads to analyzing the limiting ordinary differential equations (ODEs) of the optimization algorithms. We show that optimizing the non-convex loss over the weights corresponds to optimizing some strongly convex loss over the prediction error. As a consequence, we can leverage classical convex optimization theory to understand the convergence behavior of neural networks. We believe our approach can also be extended to other optimization algorithms and network architectures.
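
As a toy numerical companion to this claim, the sketch below integrates the limiting ODEs of gradient descent (gradient flow) and the Heavy Ball method on a strongly convex quadratic, a stand-in for the strongly convex loss over the prediction error induced by the NTK. The damping constant and step size are illustrative, not values from the paper.

```python
# Gradient flow x' = -grad f(x) vs. heavy-ball ODE x'' + a x' + grad f(x) = 0
# on a quadratic f(x) = x^T H x / 2 (an NTK-like proxy).
import numpy as np

L, mu = 10.0, 1.0                 # largest/smallest curvature
H = np.diag([L, mu])
grad = lambda x: H @ x

dt, T = 1e-3, 10.0
x_gd = np.array([1.0, 1.0])       # gradient-flow state
x_hb = np.array([1.0, 1.0])       # heavy-ball state
v = np.zeros_like(x_hb)           # heavy-ball velocity
a = 2 * np.sqrt(mu)               # illustrative damping constant

for _ in range(int(T / dt)):
    x_gd = x_gd - dt * grad(x_gd)
    v = v + dt * (-a * v - grad(x_hb))
    x_hb = x_hb + dt * v

# Both norms decay exponentially in t (a "linear rate" in optimization terms).
print(np.linalg.norm(x_gd), np.linalg.norm(x_hb))
```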
In this paper we advance the physical background of the energy- and flux-budget turbulence closure based on the budget equations for the turbulent kinetic and potential energies and the turbulent fluxes of momentum and buoyancy, together with a new relaxation equation for the turbulent dissipation time-scale. The closure is designed for stratified geophysical flows from neutral to very stable and accounts for the Earth's rotation. In accordance with modern experimental evidence, the closure implies maintenance of turbulence by the velocity shear at any gradient Richardson number Ri, and distinguishes between two principally different regimes: strong turbulence at Ri << 1, typical of boundary-layer flows and characterised by a practically constant turbulent Prandtl number; and weak turbulence at Ri > 1, typical of the free atmosphere or deep ocean, where the turbulent Prandtl number increases asymptotically linearly with Ri (which implies very strong suppression of heat transfer compared to momentum transfer). For use in different applications, the closure is formulated at different levels of complexity, from a local algebraic model relevant to the steady-state regime of turbulence to a hierarchy of non-local closures, including simpler down-gradient models expressed in terms of eddy viscosity and eddy conductivity, and a general non-gradient model based on prognostic equations for all basic parameters of turbulence, including the turbulent fluxes.
