
Machine Learning-Based Statistical Closure Models for Turbulent Dynamical Systems

Posted by John Harlim
Publication date: 2021
Research field: Physics
Paper language: English





We propose a Machine Learning (ML) non-Markovian closure modeling framework for accurate prediction of the statistical responses of turbulent dynamical systems subjected to external forcings. One difficulty in this statistical closure problem is the lack of training data, an undesirable configuration for supervised learning with neural network models. In this study of the 40-dimensional Lorenz-96 model, the shortage of data in time stems from the stationarity of the statistics beyond the decorrelation time; thus, the only informative content in the training data lies in the short-time transient statistics. We adopt a unified closure framework across various truncation regimes, both including and excluding the detailed dynamical equations for the variances. The closure framework employs a Long Short-Term Memory (LSTM) architecture to represent the higher-order unresolved statistical feedbacks, with careful treatment of the intrinsic instability so that the model still produces stable long-time predictions. We find that this unified, model-agnostic ML approach performs well under various truncation scenarios. Numerically, the ML closure model accurately predicts the long-time statistical responses to various time-dependent external forces that are not in the training dataset, including maximum forcing amplitudes relatively larger than those seen in training.
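The abstract above centers on an LSTM representation of the unresolved statistical feedback. Below is a minimal, hypothetical PyTorch sketch of that general idea, not the authors' exact architecture or training protocol: an LSTM maps a short window of resolved statistics of the truncated Lorenz-96 model to the closure (feedback) term; all layer sizes and variable names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LSTMClosure(nn.Module):
    """LSTM closure: maps a window of resolved statistics to the unresolved feedback term."""
    def __init__(self, n_resolved, n_hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_resolved, hidden_size=n_hidden, batch_first=True)
        self.head = nn.Linear(n_hidden, n_resolved)

    def forward(self, resolved_history):
        # resolved_history: (batch, time, n_resolved) window of short-time transient statistics
        out, _ = self.lstm(resolved_history)
        return self.head(out[:, -1, :])  # closure term at the latest time in the window

# Hypothetical training step; data would come from short-time transient statistics only.
model = LSTMClosure(n_resolved=40)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(history_batch, target_feedback):
    optimizer.zero_grad()
    loss = loss_fn(model(history_batch), target_feedback)
    loss.backward()
    optimizer.step()
    return loss.item()
```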




Read also

Complex dynamical systems are used for predictions in many domains. Because of computational costs, models are truncated, coarsened, or aggregated. As the neglected and unresolved terms become important, the utility of model predictions diminishes. We develop a novel, versatile, and rigorous methodology to learn non-Markovian closure parameterizations for known-physics/low-fidelity models using data from high-fidelity simulations. The new neural closure models augment low-fidelity models with neural delay differential equations (nDDEs), motivated by the Mori-Zwanzig formulation and the inherent delays in complex dynamical systems. We demonstrate that neural closures efficiently account for truncated modes in reduced-order models, capture the effects of subgrid-scale processes in coarse models, and augment the simplification of complex biological and physical-biogeochemical models. We find that using non-Markovian over Markovian closures improves long-term prediction accuracy and requires smaller networks. We derive the adjoint equations and network architectures needed to efficiently implement the new discrete and distributed nDDEs for any time-integration scheme, allowing nonuniformly spaced temporal training data. The performance of discrete over distributed delays in closure models is explained using information theory, and we find an optimal amount of past information for a specified architecture. Finally, we analyze computational complexity and explain the limited additional cost due to neural closure models.
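As a companion to the abstract above, here is a minimal, hypothetical sketch of a discrete-delay neural closure, du/dt = f_low(u(t)) + NN(u(t), u(t - tau)), integrated with simple explicit Euler stepping; it is not the paper's nDDE adjoint implementation, and all names and sizes are assumptions.

```python
import torch
import torch.nn as nn

class DelayClosure(nn.Module):
    """Neural closure term that sees both the current state and the state one delay earlier."""
    def __init__(self, dim, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))

    def forward(self, u_now, u_delayed):
        return self.net(torch.cat([u_now, u_delayed], dim=-1))

def euler_step(u_hist, f_low, closure, dt, delay_steps):
    """One explicit Euler step of du/dt = f_low(u) + closure(u, u_delayed).

    u_hist is a list of past states with len(u_hist) > delay_steps.
    """
    u_now = u_hist[-1]
    u_delayed = u_hist[-1 - delay_steps]
    du = f_low(u_now) + closure(u_now, u_delayed)
    u_hist.append(u_now + dt * du)
    return u_hist
```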
We derive a hierarchy of closures based on perturbations of well-known entropy-based closures; we therefore refer to them as perturbed entropy-based models. Our derivation yields final equations containing additional convective and diffusive terms that are added to the flux term of the standard closure. We present numerical simulations for the simplest member of the hierarchy, the perturbed M1 or PM1 model, in one spatial dimension. Simulations are performed using a Runge-Kutta discontinuous Galerkin method with special limiters that guarantee the realizability of the moment variables and the positivity of the material temperature. Improvements over the standard M1 model are observed in cases where unphysical shocks develop in the M1 model.
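For context on what the PM1 model perturbs, the sketch below evaluates the standard Levermore M1 Eddington factor, chi(f) = (3 + 4 f^2) / (5 + 2 sqrt(4 - 3 f^2)), which closes the second moment of the baseline M1 system in terms of the flux factor f = |F|/E; the additional convective and diffusive corrections of the PM1 model are not reproduced here, and the snippet is illustrative only.

```python
import numpy as np

def eddington_factor(f):
    """Levermore M1 Eddington factor: chi(0) = 1/3 (isotropic), chi(1) = 1 (free streaming)."""
    f = np.clip(np.abs(f), 0.0, 1.0)  # keep the flux factor in the realizable range
    return (3.0 + 4.0 * f**2) / (5.0 + 2.0 * np.sqrt(4.0 - 3.0 * f**2))

def closed_second_moment(E, F):
    """Second moment P = chi(|F|/E) * E appearing in the flux term of the 1D M1 system."""
    f = np.abs(F) / np.maximum(E, 1e-300)
    return eddington_factor(f) * E
```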
The study of topological properties by machine learning approaches has attracted considerable interest recently. Here we propose machine learning the topological invariants that are unique to non-Hermitian systems. Specifically, we train neural networks to predict the winding of eigenvalues of four prototypical non-Hermitian Hamiltonians on the complex energy plane with nearly 100% accuracy. Our demonstrations in the non-Hermitian Hatano-Nelson model, Su-Schrieffer-Heeger model and generalized Aubry-Andre-Harper model in one dimension, and the two-dimensional Dirac fermion model with non-Hermitian terms, show the capability of neural networks in exploring topological invariants and the associated topological phase transitions and topological phase diagrams in non-Hermitian systems. Moreover, neural networks trained on a small data set in the phase diagram can successfully predict topological invariants in untouched phase regions. Thus, our work paves the way to revealing non-Hermitian topology with the machine learning toolbox.
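As an illustration of the kind of label such networks learn, the sketch below numerically evaluates the spectral winding number of the Hatano-Nelson model around the Brillouin zone by tracking the phase of E(k) - E_b. The single-band form E(k) = t_R e^{ik} + t_L e^{-ik} and the sign convention are assumptions for illustration; the trained classifier itself is not shown.

```python
import numpy as np

def hatano_nelson_energy(k, t_right, t_left):
    # Assumed single-band dispersion: E(k) = t_R * exp(ik) + t_L * exp(-ik)
    return t_right * np.exp(1j * k) + t_left * np.exp(-1j * k)

def winding_number(t_right, t_left, e_base=0.0, n_k=2001):
    """Count how many times E(k) - e_base winds around the origin as k sweeps the Brillouin zone."""
    k = np.linspace(0.0, 2.0 * np.pi, n_k)
    z = hatano_nelson_energy(k, t_right, t_left) - e_base
    phase = np.unwrap(np.angle(z))
    return int(np.rint((phase[-1] - phase[0]) / (2.0 * np.pi)))

# With this convention, the dominant hopping direction sets the sign of the winding.
print(winding_number(t_right=1.0, t_left=0.5))  # +1
print(winding_number(t_right=0.5, t_left=1.0))  # -1
```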
Haiyan Chen, Yue Zeng, Yi Li (2020)
The secondary Bjerknes force plays a significant role in the evolution of bubble clusters. However, due to the complex dependence of the force on multiple parameters, it is highly non-trivial to include the effects of this force in simulations of bubble clusters. In this paper, machine learning is used to develop a data-driven model for the secondary Bjerknes force between two insonated bubbles as a function of the equilibrium radii of the bubbles, the distance between the bubbles, and the amplitude and frequency of the pressure. The force varies over several orders of magnitude, which poses a serious challenge for the usual machine learning models. To overcome this difficulty, the magnitude and the sign of the force are separated and modelled separately. A nonlinear regression is obtained with a feed-forward network model for the logarithm of the magnitude, whereas the sign is modelled by a support-vector machine model. The principles and practical aspects of training and validating the machine learning models are introduced. The predictions from the models are checked against values computed from the Keller-Miksis equations. The results show that the models are extremely efficient while providing accurate estimates of the force. The models make it computationally feasible for future simulations of bubble clusters to include the effects of the secondary Bjerknes force.
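The magnitude/sign split described above can be sketched with off-the-shelf scikit-learn estimators. The snippet below is a hypothetical illustration with placeholder data, not the authors' trained models; the feature layout (radii, separation, pressure amplitude, frequency) is assumed.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVC

# Placeholder data standing in for (radius_1, radius_2, distance, amplitude, frequency)
# and forces computed from the Keller-Miksis equations.
rng = np.random.default_rng(0)
X = rng.random((1000, 5))
force = rng.normal(size=1000) * 10.0 ** rng.integers(-6, 0, size=1000)

log_magnitude = np.log10(np.abs(force))  # regress the logarithm of the magnitude
sign = np.sign(force)                    # classify the sign separately

magnitude_model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X, log_magnitude)
sign_model = SVC(kernel="rbf").fit(X, sign)

def predict_force(x):
    """Recombine the two models: sign * 10**(predicted log10 magnitude)."""
    x = np.atleast_2d(x)
    return sign_model.predict(x) * 10.0 ** magnitude_model.predict(x)
```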
Machine learning surrogate models for quantum mechanical simulations has enabled the field to efficiently and accurately study material and molecular systems. Developed models typically rely on a substantial amount of data to make reliable prediction s of the potential energy landscape or careful active learning and uncertainty estimates. When starting with small datasets, convergence of active learning approaches is a major outstanding challenge which limited most demonstrations to online active learning. In this work we demonstrate a $Delta$-machine learning approach that enables stable convergence in offline active learning strategies by avoiding unphysical configurations. We demonstrate our frameworks capabilities on a structural relaxation, transition state calculation, and molecular dynamics simulation, with the number of first principle calculations being cut down anywhere from 70-90%. The approach is incorporated and developed alongside AMPtorch, an open-source machine learning potential package, along with interactive Google Colab notebook examples.
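A minimal sketch of the $\Delta$-machine-learning idea mentioned above, assuming a generic cheap baseline potential and kernel ridge regression for the correction; this is not the AMPtorch implementation, and all data and function names are placeholders.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def cheap_baseline(features):
    """Hypothetical inexpensive surrogate energy (e.g., an empirical potential)."""
    return features.sum(axis=1)

# Placeholder descriptors and "expensive" reference energies.
rng = np.random.default_rng(1)
X = rng.random((200, 8))
E_ref = cheap_baseline(X) + 0.1 * np.sin(10.0 * X[:, 0])

# Delta-ML: learn only the correction between the baseline and the reference.
delta_model = KernelRidge(kernel="rbf", alpha=1e-3).fit(X, E_ref - cheap_baseline(X))

def predict_energy(x):
    """Final prediction = cheap baseline + learned correction."""
    x = np.atleast_2d(x)
    return cheap_baseline(x) + delta_model.predict(x)
```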