There is growing interest in developing data-driven subgrid-scale (SGS) models for large-eddy simulation (LES) using machine learning (ML). In a priori (offline) tests, some recent studies have found that ML-based data-driven SGS models trained on high-fidelity data (e.g., from direct numerical simulation, DNS) outperform baseline physics-based models and accurately capture the inter-scale transfers, both the forward transfer (diffusion) and backscatter. While promising, instabilities in a posteriori (online) tests and the inability to generalize to a different flow (e.g., one with a higher Reynolds number, Re) remain major obstacles to broadening the applications of such data-driven SGS models. For example, many of the aforementioned studies found instabilities that required remedies, often ad hoc, to stabilize the LES at the expense of reduced accuracy. Here, using 2D decaying turbulence as the testbed, we show that deep fully convolutional neural networks (CNNs) can accurately predict the SGS forcing terms and the inter-scale transfers in a priori tests and, if trained with enough samples, lead to stable and accurate a posteriori LES-CNN. Further analysis attributes the instabilities to the disproportionately lower accuracy of the CNNs in capturing backscatter when the training set is small. We also show that transfer learning, which involves re-training the CNN with a small amount of data (e.g., 1%) from the new flow, enables accurate and stable a posteriori LES-CNN for flows with 16x higher Re (as well as higher grid resolution if needed). These results show the promise of CNNs with transfer learning to provide stable, accurate, and generalizable LES for practical use.
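To illustrate the a priori setup this abstract describes, the sketch below (our own illustrative code, not the authors'; function names and the 64x64 synthetic fields are assumptions) applies a sharp spectral cutoff filter to 2D fields and forms the unclosed SGS stress component tau = bar(uv) - bar(u)bar(v), which is the kind of quantity the CNNs are trained to predict:

```python
import numpy as np

def spectral_cutoff_filter(field, k_cut):
    """Sharp spectral low-pass filter: zero all Fourier modes with |k| > k_cut."""
    n = field.shape[0]
    k = np.fft.fftfreq(n, d=1.0 / n)          # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    mask = np.sqrt(kx**2 + ky**2) <= k_cut
    return np.real(np.fft.ifft2(np.fft.fft2(field) * mask))

def sgs_stress(u, v, k_cut):
    """tau = bar(u v) - bar(u) bar(v): the unclosed subgrid-scale term."""
    uv_bar = spectral_cutoff_filter(u * v, k_cut)
    u_bar = spectral_cutoff_filter(u, k_cut)
    v_bar = spectral_cutoff_filter(v, k_cut)
    return uv_bar - u_bar * v_bar

# Example on synthetic 64x64 "DNS" velocity components
rng = np.random.default_rng(0)
u = rng.standard_normal((64, 64))
v = rng.standard_normal((64, 64))
tau = sgs_stress(u, v, k_cut=8)
```

In a priori training, pairs of filtered fields and the exact tau computed this way from DNS snapshots serve as inputs and labels.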
Advanced nuclear reactors often exhibit complex thermal-fluid phenomena during transients. To accurately capture such phenomena, a coarse-mesh three-dimensional (3-D) modeling capability is desired for modern nuclear-system codes. In coarse-mesh 3-D modeling of advanced-reactor transients that involve flow and heat transfer, accurately predicting the turbulent viscosity is challenging and requires an accurate, computationally efficient model of the unresolved fine-scale turbulence. In this paper, we propose a data-driven coarse-mesh turbulence model based on local flow features for the transient analysis of thermal mixing and stratification in a sodium-cooled fast reactor. The model has a coarse-mesh setup to ensure computational efficiency, while it is trained on fine-mesh computational fluid dynamics (CFD) data to ensure accuracy. A novel neural-network architecture, combining a densely connected convolutional network with a long short-term memory (LSTM) network, is developed to learn efficiently from the spatio-temporal CFD transient simulation results. The neural-network model was trained and optimized on a loss-of-flow transient and demonstrated high accuracy in predicting the turbulent-viscosity field throughout the transient. The trained model's generalization capability was also investigated on two other transients with different inlet conditions. The study demonstrates the potential of the proposed data-driven approach to support coarse-mesh multi-dimensional modeling of advanced reactors.
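A convolutional-recurrent model of the kind described consumes sequences of consecutive snapshots. A minimal sketch of arranging transient CFD data into such sequences (the helper name, window length, and 16x16 field size are our own assumptions, not the paper's code):

```python
import numpy as np

def make_sequences(snapshots, window):
    """Turn a transient of 2D fields (T, H, W) into overlapping input
    sequences (N, window, H, W) and next-step targets (N, H, W), the
    shapes a convolutional + LSTM model typically consumes."""
    T = snapshots.shape[0]
    inputs = np.stack([snapshots[i:i + window] for i in range(T - window)])
    targets = snapshots[window:]
    return inputs, targets

# 100 coarse-mesh snapshots of a hypothetical 16x16 turbulent-viscosity field
transient = np.random.default_rng(1).random((100, 16, 16))
X, y = make_sequences(transient, window=5)
```

The convolutional part then extracts spatial features from each snapshot in a window, and the LSTM part advances them in time to predict the target field.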
We propose a new model of turbulence for use in large-eddy simulations (LES). The turbulent force, represented here by the turbulent Lamb vector, is divided into two contributions. The contribution involving only subfilter fields is modeled deterministically through a classical eddy viscosity. The other contribution, involving both filtered and subfilter scales, is computed dynamically as the solution of a generalized (stochastic) Langevin equation. This equation is derived by applying Rapid Distortion Theory (RDT) to the subfilter scales; the resulting friction operator therefore includes both advection and stretching by the resolved scales. The stochastic noise is derived as the sum of a contribution from the energy cascade and a contribution from the pressure. The LES model thus consists of an equation for the resolved scales, including the turbulent force, and a generalized Langevin equation integrated on a twice-finer grid. The model is validated by comparison to DNS and tested against classical eddy-viscosity LES models for homogeneous isotropic turbulence. We show that even in this situation, where no walls are present, including backscatter through the Langevin equation yields a better description of the flow.
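The paper's generalized Langevin equation is specific to its RDT-derived operators, but the basic numerical ingredient, integrating a stochastic equation with a friction (drift) term and a noise term, can be illustrated with an Euler-Maruyama scheme for a scalar Ornstein-Uhlenbeck process. This toy process stands in for the paper's subfilter equation; theta, sigma, and the step count are purely illustrative:

```python
import numpy as np

def euler_maruyama_ou(theta, sigma, dt, n_steps, rng):
    """Integrate dX = -theta * X dt + sigma dW with Euler-Maruyama.
    The stationary variance of this process is sigma**2 / (2 * theta)."""
    x = np.zeros(n_steps)
    for i in range(1, n_steps):
        x[i] = x[i - 1] - theta * x[i - 1] * dt \
               + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

rng = np.random.default_rng(42)
path = euler_maruyama_ou(theta=1.0, sigma=0.5, dt=0.01, n_steps=200_000, rng=rng)
```

In the actual model, the drift would be the RDT friction operator (advection plus stretching by the resolved field) and the noise would carry the cascade and pressure contributions, integrated on the twice-finer grid.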
We present results of direct numerical simulation of isotropic turbulence of surface gravity waves in the framework of the Hamiltonian equations. For the first time, simultaneous formation of both direct and inverse cascades is observed within the primordial dynamical equations. At the same time, a strong long-wave background (condensate) develops. We show that the obtained Kolmogorov spectra are very sensitive to the presence of this condensate. Such a situation should be typical of experimental wave tanks, flumes, and small lakes.
Recently, physics-driven deep learning methods have shown particular promise for the prediction of physical fields, especially for reducing the dependency on large amounts of pre-computed training data. In this work, we target the physics-driven learning of complex flow fields at high resolution. We propose the use of convolutional neural network (CNN) based U-net architectures to efficiently represent and reconstruct the input and output fields, respectively. By introducing the Navier-Stokes equations and boundary conditions into the loss functions, the physics-driven CNN is designed to predict the corresponding steady flow fields directly. In particular, this avoids many of the difficulties associated with approaches employing fully connected neural networks. Several numerical experiments are conducted to investigate the behavior of the CNN approach, and the results indicate that first-order accuracy is achieved. Specifically, for the case of flow around a cylinder, different flow regimes can be learned, and the attached twin vortices are predicted correctly. The numerical results also show that training for multiple cases is accelerated significantly, especially for difficult cases at low Reynolds numbers and when limited reference solutions are used as supplementary learning targets.
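A physics-driven loss of the kind described can be sketched as a discrete residual of the steady momentum equation evaluated on the interior of the grid; the function names, the central-difference stencil, and the grid parameters below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def x_momentum_residual(u, v, p, dx, dy, Re):
    """Interior residual of the steady 2D x-momentum equation
    u u_x + v u_y = -p_x + (1/Re)(u_xx + u_yy), via central differences.
    Driving this residual toward zero is the physics part of the loss."""
    ux = (u[2:, 1:-1] - u[:-2, 1:-1]) / (2 * dx)
    uy = (u[1:-1, 2:] - u[1:-1, :-2]) / (2 * dy)
    px = (p[2:, 1:-1] - p[:-2, 1:-1]) / (2 * dx)
    uxx = (u[2:, 1:-1] - 2 * u[1:-1, 1:-1] + u[:-2, 1:-1]) / dx**2
    uyy = (u[1:-1, 2:] - 2 * u[1:-1, 1:-1] + u[1:-1, :-2]) / dy**2
    uc, vc = u[1:-1, 1:-1], v[1:-1, 1:-1]
    return uc * ux + vc * uy + px - (uxx + uyy) / Re

def physics_loss(u, v, p, dx=0.1, dy=0.1, Re=40.0):
    """Mean-squared momentum residual, usable as (part of) a training loss."""
    r = x_momentum_residual(u, v, p, dx, dy, Re)
    return float(np.mean(r**2))

# A uniform stream with constant pressure satisfies the equation exactly,
# so its physics loss is zero.
n = 32
loss = physics_loss(np.ones((n, n)), np.zeros((n, n)), np.ones((n, n)))
```

In training, such a residual is evaluated on the network's predicted fields and minimized alongside boundary-condition penalties, which is what removes the need for large pre-computed datasets.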
Modelling the near-wall region of wall-bounded turbulent flows is a widespread practice to reduce the computational cost of large-eddy simulations (LES) at high Reynolds number. As a first step towards a data-driven wall model, a neural-network-based approach to predict the near-wall behaviour in a turbulent open channel flow is investigated. The fully convolutional network (FCN) proposed by Guastoni et al. [preprint, arXiv:2006.12483] is trained to predict the two-dimensional velocity-fluctuation fields at $y^{+}_{\rm target}$, using the fluctuations sampled in wall-parallel planes located farther from the wall, at $y^{+}_{\rm input}$. The data for training and testing are obtained from direct numerical simulations (DNS) at friction Reynolds numbers $Re_{\tau} = 180$ and $550$. The turbulent velocity-fluctuation fields are sampled at various wall-normal locations, i.e. $y^{+} = \{15, 30, 50, 80, 100, 120, 150\}$. At $Re_{\tau}=550$, the FCN can exploit the self-similarity in the logarithmic region of the flow and predict the velocity-fluctuation fields at $y^{+} = 50$ using those at $y^{+} = 100$ as input, with less than 20% error in the predicted streamwise-fluctuation intensity. These results are an encouraging starting point for developing a neural-network-based approach to modelling near-wall turbulence in numerical simulations.
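The 20% figure above is a relative error in the streamwise-fluctuation intensity; a metric of that kind can be computed as below (the exact normalization used in the paper may differ, and the field sizes are illustrative):

```python
import numpy as np

def fluctuation_intensity(field):
    """RMS of the fluctuations about the plane mean, i.e. u'_rms."""
    fluct = field - field.mean()
    return np.sqrt(np.mean(fluct**2))

def intensity_error(pred, ref):
    """Relative error (in percent) of predicted vs reference u'_rms."""
    return 100.0 * abs(fluctuation_intensity(pred)
                       - fluctuation_intensity(ref)) / fluctuation_intensity(ref)

rng = np.random.default_rng(3)
ref = rng.standard_normal((128, 128))     # stand-in for a DNS fluctuation plane
pred = 0.9 * ref                          # prediction underestimating amplitude by 10%
err = intensity_error(pred, ref)
```

A prediction that reproduces the reference field's amplitude exactly scores zero, while the 10%-attenuated field above scores a 10% intensity error.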