Modeling realistic fluid and plasma flows is computationally intensive, motivating the use of reduced-order models for a variety of scientific and engineering tasks. However, it is challenging to characterize, much less guarantee, the global stability (i.e., long-time boundedness) of these models. The seminal work of Schlegel and Noack (JFM, 2015) provided a theorem outlining necessary and sufficient conditions to ensure global stability in systems with energy-preserving, quadratic nonlinearities, with the goal of evaluating the stability of projection-based models. In this work, we incorporate this theorem into modern data-driven models obtained via machine learning. First, we propose that this theorem should be a standard diagnostic for the stability of projection-based and data-driven models, examining the conditions under which it holds. Second, we illustrate how to modify the objective function in machine learning algorithms to promote globally stable models, with implications for the modeling of fluid and plasma flows. Specifically, we introduce a modified trapping SINDy algorithm based on the sparse identification of nonlinear dynamics (SINDy) method. This method enables the identification of models that, by construction, only produce bounded trajectories. The effectiveness and accuracy of this approach are demonstrated on a broad set of examples of varying model complexity and physical origin, including the vortex shedding in the wake of a circular cylinder.
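The stability theorem referenced above can be checked numerically for any quadratic model. The sketch below, a minimal illustration rather than the paper's algorithm, verifies the two ingredients on the Lorenz system: the quadratic tensor is energy-preserving, and a shift of the origin makes the symmetric part of the effective linear operator negative definite, which guarantees bounded (trapped) trajectories. The shift vector `m` used here is the classical trapping-region center for Lorenz.

```python
import numpy as np

# Global-stability diagnostic for a quadratic model
#   dx/dt = L x + Q(x, x),  Q(a, b)_i = sum_jk Q[i, j, k] a_j b_k,
# illustrated on the Lorenz system with its standard parameters.
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0

L = np.array([[-sigma, sigma,  0.0],
              [rho,    -1.0,   0.0],
              [0.0,     0.0, -beta]])

Q = np.zeros((3, 3, 3))
Q[1, 0, 2] = Q[1, 2, 0] = -0.5   # -xz term in dy/dt (symmetrized in j, k)
Q[2, 0, 1] = Q[2, 1, 0] = 0.5    # +xy term in dz/dt (symmetrized in j, k)

# 1) Energy-preserving nonlinearity: Q[i,j,k] + Q[j,k,i] + Q[k,i,j] = 0.
cyclic = Q + np.transpose(Q, (1, 2, 0)) + np.transpose(Q, (2, 0, 1))
assert np.allclose(cyclic, 0.0)

# 2) Shift the origin by m and check that the symmetric part of the
#    effective linear operator is negative definite.
m = np.array([0.0, 0.0, rho + sigma])          # known trapping center for Lorenz
L_shifted = L + 2.0 * np.einsum('ijk,j->ik', Q, m)
A_S = 0.5 * (L_shifted + L_shifted.T)
eigs = np.linalg.eigvalsh(A_S)
print("max eigenvalue of A_S:", eigs.max())    # negative -> bounded trajectories
```

For Lorenz the shifted symmetric operator is diagonal with entries $-\sigma$, $-1$, $-\beta$, so the diagnostic confirms a trapping region; trapping SINDy promotes exactly this negative-definiteness during model identification.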
We present data-driven kinematic models for the motion of bubbles in high-Re turbulent fluid flows based on recurrent neural networks with long short-term memory enhancements. The models extend empirical relations, such as the Maxey-Riley (MR) equation and its variants, whose applicability is limited when either the bubble size is large or the flow is very complex. The recurrent neural networks are trained on the trajectories of bubbles obtained by direct numerical simulations (DNS) of the Navier-Stokes equations for a two-component incompressible flow model. The long short-term memory components exploit the time history of the flow field that the bubbles have encountered along their trajectories, and the networks are further augmented by imposing rotational invariance on their structure. We first train and validate the formulated model using DNS data for a turbulent Taylor-Green vortex. Then we examine the model's predictive capabilities and its generalization to Reynolds numbers different from those of the training data on benchmark problems, including a steady (Hill's spherical vortex) and an unsteady (Gaussian vortex ring) flow field. We find that the predictions of the developed model are significantly improved compared with those obtained by the MR equation. Our results indicate that data-driven models with history terms are well suited to capturing the trajectories of bubbles in turbulent flows.
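The core mechanism here, a memory cell that accumulates the flow history seen along a trajectory, can be sketched with a minimal LSTM cell in NumPy. All shapes, feature choices, and weights below are illustrative placeholders, not the trained model from the paper.

```python
import numpy as np

# Minimal LSTM cell rolled over flow samples encountered along a bubble
# trajectory; the cell state c carries the time history forward.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_in, n_h = 4, 8                         # e.g. 3 velocity components + pressure
W = rng.standard_normal((4 * n_h, n_in + n_h)) * 0.1  # gate weights (i, f, g, o)
b = np.zeros(4 * n_h)

def lstm_step(x, h, c):
    z = W @ np.concatenate([x, h]) + b
    i, f, g, o = np.split(z, 4)
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # memory cell update
    h_new = sigmoid(o) * np.tanh(c_new)
    return h_new, c_new

# Process 20 time steps of (synthetic) flow history seen by the bubble.
h, c = np.zeros(n_h), np.zeros(n_h)
for x_t in rng.standard_normal((20, n_in)):
    h, c = lstm_step(x_t, h, c)

W_out = rng.standard_normal((3, n_h)) * 0.1
v_bubble = W_out @ h                     # predicted bubble velocity (3 components)
print(v_bubble.shape)
```

The forget gate `f` is what lets the model weigh recent versus distant flow history, the capability that fixed-form history kernels in MR-type equations lack.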
There are two main strategies for improving the projection-based reduced order model (ROM) accuracy: (i) improving the ROM, i.e., adding new terms to the standard ROM; and (ii) improving the ROM basis, i.e., constructing ROM bases that yield more accurate ROMs. In this paper, we use the latter. We propose new Lagrangian inner products that we use together with Eulerian and Lagrangian data to construct new Lagrangian ROMs. We show that the new Lagrangian ROMs are orders of magnitude more accurate than the standard Eulerian ROMs, i.e., ROMs that use the standard Eulerian inner product and data to construct the ROM basis. Specifically, for the quasi-geostrophic equations, we show that the new Lagrangian ROMs are more accurate than the standard Eulerian ROMs in approximating not only Lagrangian fields (e.g., the finite time Lyapunov exponent (FTLE)), but also Eulerian fields (e.g., the streamfunction). We emphasize that the new Lagrangian ROMs do not employ any closure modeling to model the effect of discarded modes (which is standard procedure for low-dimensional ROMs of complex nonlinear systems). Thus, the dramatic increase in the accuracy of the new Lagrangian ROMs is entirely due to the novel Lagrangian inner products used to build the Lagrangian ROM basis.
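The basic machinery of building a ROM basis under a non-standard inner product can be sketched with the method of snapshots. The diagonal weight matrix `M` below is a stand-in for an inner product, not the paper's actual Lagrangian inner product; the point is only how a weighted inner product enters the basis construction.

```python
import numpy as np

# POD basis via the method of snapshots under a weighted inner product
#   <u, v>_M = u^T M v,  with M symmetric positive definite.
rng = np.random.default_rng(1)
n, n_snap, r = 50, 20, 5

X = rng.standard_normal((n, n_snap))     # snapshot matrix (columns = states)
w = rng.uniform(0.5, 2.0, size=n)
M = np.diag(w)                           # illustrative SPD weight matrix

# Eigen-decompose the weighted Gramian X^T M X.
G = X.T @ M @ X
lam, V = np.linalg.eigh(G)
idx = np.argsort(lam)[::-1][:r]          # keep the r most energetic modes
Phi = X @ V[:, idx] / np.sqrt(lam[idx])  # ROM basis, orthonormal w.r.t. <.,.>_M

print(np.allclose(Phi.T @ M @ Phi, np.eye(r)))   # True
```

Changing `M` changes which snapshot features the basis considers "energetic", which is exactly the lever the Lagrangian inner products pull.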
A nonlocal subgrid-scale stress (SGS) model is developed based on the convolutional neural network (CNN), a powerful supervised data-driven approach. The CNN is an ideal approach for naturally incorporating nonlocal spatial information into predictions due to its wide receptive field. The CNN-based models used here take only primitive flow variables as input; the flow features are then extracted automatically, without any a priori guidance. The nonlocal models, trained on direct numerical simulation (DNS) data of a turbulent channel flow at $Re_{\tau}=178$, are assessed in both a priori and a posteriori tests, providing physically reasonable flow statistics (such as mean velocity and velocity fluctuations) close to the DNS results even when extrapolating to a higher Reynolds number, $Re_{\tau}=600$. In our model, the backscatter is also predicted well and the numerical simulation is stable. The nonlocal models outperform local data-driven models such as the artificial neural network and some SGS models, e.g., the Smagorinsky model, in actual large eddy simulation (LES). The model is also robust, since stable solutions can be obtained when varying the grid resolution from one-half to double the spatial resolution used in training. We also investigate the influence of receptive fields and suggest the two-point correlation analysis as a quantitative method to guide the design of nonlocal physical models. To facilitate the coupling of machine learning (ML) algorithms with computational fluid dynamics (CFD), a novel heterogeneous ML-CFD framework is proposed. The present study provides effective data-driven nonlocal methods for SGS modelling in the LES of complex anisotropic turbulent flows.
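The suggested design rule, size the receptive field from the two-point correlation, is easy to illustrate. The sketch below computes the autocorrelation of a synthetic correlated 1D signal (a stand-in for DNS velocity data) via the FFT and estimates a correlation length; a CNN receptive field would be chosen to cover lags over which the correlation remains significant.

```python
import numpy as np

# Two-point autocorrelation of a 1D signal via FFT (Wiener-Khinchin),
# as a quantitative guide for choosing a CNN receptive field.
rng = np.random.default_rng(2)
n = 4096
u = np.convolve(rng.standard_normal(n), np.ones(32) / 32, mode='same')
u -= u.mean()                            # synthetic correlated field

spec = np.abs(np.fft.rfft(u)) ** 2       # power spectrum
R = np.fft.irfft(spec, n=n)              # circular autocorrelation
R /= R[0]                                # normalize so R(0) = 1

# Rough integral length scale: sum R up to its first zero crossing.
first_zero = np.argmax(R < 0)
L_int = R[:first_zero].sum()
print("correlation length (grid points):", L_int)
```

Here the 32-point moving-average filter sets the correlation length, so the estimate comes out near half the filter width; with DNS data the same diagnostic would tie the receptive field to the physical two-point correlation of the resolved flow.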
Generalizability of machine-learning (ML) based turbulence closures to accurately predict unseen practical flows remains an important challenge. It is well recognized that the ML neural network architecture and training protocol profoundly influence the generalizability characteristics. The objective of this work is to identify the unique challenges in finding the ML closure network hyperparameters that arise due to the inherent complexity of turbulence. Three proxy-physics turbulence surrogates of different degrees of complexity (yet significantly simpler than turbulence physics) are employed. The proxy-physics models mimic some of the key features of turbulence and provide training/testing data at low computational expense. The focus is on the following turbulence features: high dimensionality of the flow physics parameter space, nonlinearity effects, and bifurcations in emergent behavior. A standard fully-connected neural network is used to reproduce the data of the simplified proxy-physics turbulence surrogates. Lacking a rigorous procedure to find globally optimal ML neural network hyperparameters, a brute-force parameter-space sweep is performed to examine the existence of locally optimal solutions. Even for this simple case, it is demonstrated that the choice of the optimal hyperparameters for a fully-connected neural network is not straightforward when it is trained with only partially available data in parameter space. Overall, specific issues to be addressed are identified, and the findings provide a realistic perspective on the utility of ML turbulence closures for practical applications.
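A brute-force hyperparameter sweep of the kind described can be sketched in a few lines. The proxy-physics target, the grid values, and the tiny two-layer network below are all illustrative stand-ins, not the surrogates or architectures from the paper.

```python
import itertools
import numpy as np

# Brute-force sweep over (width, learning rate) for a tiny fully-connected
# network fit to a smooth nonlinear "proxy-physics" target.
rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) * X[:, 1]                # illustrative surrogate target

def train_mlp(width, lr, steps=500):
    W1 = rng.standard_normal((2, width)) * 0.5
    W2 = rng.standard_normal((width, 1)) * 0.5
    for _ in range(steps):
        h = np.tanh(X @ W1)
        err = (h @ W2)[:, 0] - y
        # backprop through the 2-layer net, plain gradient descent
        gW2 = h.T @ err[:, None] / len(y)
        gW1 = X.T @ ((err[:, None] * W2.T) * (1 - h ** 2)) / len(y)
        W2 -= lr * gW2
        W1 -= lr * gW1
    mse = float(np.mean(err ** 2))
    return mse if np.isfinite(mse) else np.inf   # guard against divergence

grid = list(itertools.product([4, 16, 64], [0.01, 0.05, 0.2]))
losses = {hp: train_mlp(*hp) for hp in grid}
best = min(losses, key=losses.get)
print("best (width, lr):", best, "mse:", losses[best])
```

Even on this toy problem the loss surface over the grid is not monotone in either hyperparameter, which is a miniature version of the difficulty the paper documents when training data only partially cover the parameter space.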
A novel machine learning algorithm is presented, serving as a data-driven turbulence modeling tool for Reynolds-averaged Navier-Stokes (RANS) simulations. This machine learning algorithm, called the Tensor Basis Random Forest (TBRF), is used to predict the Reynolds-stress anisotropy tensor, while guaranteeing Galilean invariance by making use of a tensor basis. By modifying a random forest algorithm to accept such a tensor basis, a robust, easy-to-implement, and easy-to-train algorithm is created. The algorithm is trained on several flow cases using DNS/LES data, and used to predict the Reynolds-stress anisotropy tensor for new, unseen flows. The resulting predictions of turbulence anisotropy are used as a turbulence model within a custom RANS solver. Stabilization of this solver is necessary, and is achieved by a continuation method and a modified $k$-equation. Results are compared to the neural network approach of Ling et al. [J. Fluid Mech. 807 (2016): 155-166]. Results show that the TBRF algorithm is able to accurately predict the anisotropy tensor for various flow cases, with realizable predictions close to the DNS/LES reference data. Corresponding mean flows for a square duct flow case and a backward-facing step flow case show good agreement with DNS and experimental datasets. Overall, these results are seen as a next step towards improved data-driven modelling of turbulence. This creates an opportunity to generate custom turbulence closures for specific classes of flows, limited only by the availability of LES/DNS data.
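The tensor-basis construction underlying both TBRF and the Ling et al. network expresses the anisotropy as $b = \sum_n g_n T^{(n)}$, with the $T^{(n)}$ built from the strain- and rotation-rate tensors. The sketch below assembles the first few of Pope's basis tensors; the coefficients $g_n$ are random placeholders standing in for the trained random-forest outputs.

```python
import numpy as np

# Tensor-basis assembly: b = sum_n g_n T^(n), built from the symmetric
# strain-rate tensor S and the antisymmetric rotation-rate tensor R.
rng = np.random.default_rng(4)

A = rng.standard_normal((3, 3))                         # stand-in velocity gradient
S = 0.5 * (A + A.T)
S -= np.trace(S) / 3 * np.eye(3)                        # symmetric, traceless
R = 0.5 * (A - A.T)                                     # antisymmetric

T = [
    S,                                                  # Pope's T^(1)
    S @ R - R @ S,                                      # Pope's T^(2)
    S @ S - np.trace(S @ S) / 3 * np.eye(3),            # Pope's T^(3)
]
g = rng.standard_normal(len(T))     # placeholders for random-forest outputs
b = sum(gn * Tn for gn, Tn in zip(g, T))

# Each basis tensor is symmetric and traceless, so b inherits the correct
# structure no matter what coefficients the regressor produces.
print(np.allclose(b, b.T), abs(np.trace(b)) < 1e-9)
```

Because every basis tensor transforms correctly under rotations and is unchanged by a uniform translation of the frame, any regressed coefficients yield a Galilean-invariant prediction, which is the structural guarantee the abstract highlights.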