
Turbulence closure modeling with data-driven techniques: Existence of generalizable deep neural networks under the assumption of full data

Added by Salar Taghizadeh
Publication date: 2021
Field: Physics
Language: English





Generalizability of machine-learning (ML) based turbulence closures to accurately predict unseen practical flows remains an important challenge. It is well recognized that the ML neural network architecture and training protocol profoundly influence the generalizability characteristics. The objective of this work is to identify the unique challenges in finding the ML closure network hyperparameters that arise due to the inherent complexity of turbulence. Three proxy-physics turbulence surrogates of different degrees of complexity (yet significantly simpler than turbulence physics) are employed. The proxy-physics models mimic some of the key features of turbulence and provide training/testing data at low computational expense. The focus is on the following turbulence features: high dimensionality of the flow physics parameter space, non-linearity effects, and bifurcations in emergent behavior. A standard fully-connected neural network is used to reproduce the data of the simplified proxy-physics turbulence surrogates. Lacking a rigorous procedure for finding globally optimal ML neural network hyperparameters, a brute-force parameter-space sweep is performed to examine the existence of a locally optimal solution. Even for this simple case, it is demonstrated that the choice of optimal hyperparameters for a fully-connected neural network is not straightforward when it is trained with only partially available data in parameter space. Overall, specific issues to be addressed are identified, and the findings provide a realistic perspective on the utility of ML turbulence closures for practical applications.
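
As a rough illustration of the brute-force sweep described above, the sketch below trains a small fully-connected network over a grid of widths, depths, and learning rates and scores each candidate on held-out data. The toy stand-in for the proxy-physics surrogate, the grid ranges, and the training budget are illustrative assumptions, not the authors' actual setup.

```python
import itertools
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for proxy-physics data: a nonlinear map with an abrupt regime
# change (loosely mimicking bifurcation-like behavior); 3 inputs, 1 output.
x = torch.rand(2000, 3) * 4.0 - 2.0
y = torch.where(x[:, :1] > 0.0, torch.sin(x[:, 1:2] * x[:, 2:3]), -x[:, 1:2] ** 2)
x_tr, y_tr, x_te, y_te = x[:1500], y[:1500], x[1500:], y[1500:]

def make_fcnn(width, depth):
    """Fully-connected network with `depth` hidden layers of size `width`."""
    layers, d = [], 3
    for _ in range(depth):
        layers += [nn.Linear(d, width), nn.ReLU()]
        d = width
    return nn.Sequential(*layers, nn.Linear(d, 1))

def test_error(width, depth, lr, epochs=300):
    """Train on the training split and report the error on held-out data."""
    model = make_fcnn(width, depth)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x_tr), y_tr)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return nn.functional.mse_loss(model(x_te), y_te).item()

# Exhaustive sweep over a small grid of width / depth / learning rate.
grid = itertools.product([8, 32, 128], [2, 4], [1e-2, 1e-3])
errors = {hp: test_error(*hp) for hp in grid}
best = min(errors, key=errors.get)
print("best (width, depth, lr):", best, "held-out MSE:", errors[best])
```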



Related research

Yang Liu, Rui Hu, Adam Kraus (2021)
Advanced nuclear reactors often exhibit complex thermal-fluid phenomena during transients. To accurately capture such phenomena, a coarse-mesh three-dimensional (3-D) modeling capability is desired for modern nuclear-system codes. In the coarse-mesh 3-D modeling of advanced-reactor transients that involve flow and heat transfer, accurately predicting the turbulent viscosity is a challenging task that requires an accurate and computationally efficient model to capture the unresolved fine-scale turbulence. In this paper, we propose a data-driven coarse-mesh turbulence model based on local flow features for the transient analysis of thermal mixing and stratification in a sodium-cooled fast reactor. The model has a coarse-mesh setup to ensure computational efficiency, while it is trained on fine-mesh computational fluid dynamics (CFD) data to ensure accuracy. A novel neural network architecture, combining a densely connected convolutional network and a long short-term memory (LSTM) network, is developed that can efficiently learn from the spatial-temporal CFD transient simulation results. The neural network model was trained and optimized on a loss-of-flow transient and demonstrated high accuracy in predicting the turbulent viscosity field during the whole transient. The trained model's generalization capability was also investigated on two other transients with different inlet conditions. The study demonstrates the potential of applying the proposed data-driven approach to support the coarse-mesh multi-dimensional modeling of advanced reactors.
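
A minimal sketch of a combined convolutional/recurrent surrogate in the spirit of the architecture described above is given below: a convolutional encoder compresses the coarse-mesh flow features at each time step, an LSTM tracks the transient, and a decoder emits the turbulent-viscosity field. The layer sizes, the 2-D mesh, and the number of input features are illustrative assumptions, not the authors' actual design (which uses a densely connected convolutional network).

```python
import torch
import torch.nn as nn

class CoarseMeshTurbulenceModel(nn.Module):
    def __init__(self, n_features=4, hidden=64, mesh=(16, 16)):
        super().__init__()
        self.mesh = mesh
        # Convolutional encoder: local flow features on the coarse mesh -> latent vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(n_features, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * (mesh[0] // 2) * (mesh[1] // 2), hidden),
        )
        # LSTM captures the temporal evolution of the latent state over the transient.
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        # Decoder maps the latent state back to a turbulent-viscosity field.
        self.decoder = nn.Linear(hidden, mesh[0] * mesh[1])

    def forward(self, x):                    # x: (batch, time, features, H, W)
        b, t = x.shape[:2]
        z = self.encoder(x.flatten(0, 1)).view(b, t, -1)
        z, _ = self.lstm(z)
        return self.decoder(z).view(b, t, *self.mesh)   # viscosity field per step

model = CoarseMeshTurbulenceModel()
demo = torch.randn(2, 10, 4, 16, 16)         # 2 transients, 10 time steps each
print(model(demo).shape)                     # torch.Size([2, 10, 16, 16])
```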
There are two main strategies for improving the accuracy of projection-based reduced order models (ROMs): (i) improving the ROM, i.e., adding new terms to the standard ROM; and (ii) improving the ROM basis, i.e., constructing ROM bases that yield more accurate ROMs. In this paper, we use the latter. We propose new Lagrangian inner products that we use together with Eulerian and Lagrangian data to construct new Lagrangian ROMs. We show that the new Lagrangian ROMs are orders of magnitude more accurate than the standard Eulerian ROMs, i.e., ROMs that use the standard Eulerian inner product and data to construct the ROM basis. Specifically, for the quasi-geostrophic equations, we show that the new Lagrangian ROMs are more accurate than the standard Eulerian ROMs in approximating not only Lagrangian fields (e.g., the finite-time Lyapunov exponent (FTLE)), but also Eulerian fields (e.g., the streamfunction). We emphasize that the new Lagrangian ROMs do not employ any closure modeling to model the effect of discarded modes (which is standard procedure for low-dimensional ROMs of complex nonlinear systems). Thus, the dramatic increase in the new Lagrangian ROMs' accuracy is entirely due to the novel Lagrangian inner products used to build the Lagrangian ROM basis.
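
The central point, that the ROM basis depends on the inner product used to construct it, can be sketched with the method of snapshots. In the sketch below, the snapshot matrix and the weight matrix M standing in for a non-Euclidean (e.g., Lagrangian-style) inner product are illustrative placeholders, not the paper's quasi-geostrophic data or actual inner products.

```python
import numpy as np

def pod_basis(snapshots, M, r):
    """First r POD modes of `snapshots` (n_dof x n_snap) w.r.t. <u, v>_M = u^T M v."""
    C = snapshots.T @ M @ snapshots                 # weighted correlation matrix
    eigval, eigvec = np.linalg.eigh(C)
    idx = np.argsort(eigval)[::-1][:r]              # keep the r most energetic modes
    modes = snapshots @ eigvec[:, idx]
    # Normalize each mode in the M-inner product.
    norms = np.sqrt(np.einsum("ij,jk,ki->i", modes.T, M, modes))
    return modes / norms

rng = np.random.default_rng(0)
S = rng.standard_normal((200, 40))                  # 40 snapshots of a 200-DOF field
M_euclidean = np.eye(200)                           # standard (Eulerian-style) inner product
M_weighted = np.diag(rng.uniform(0.5, 2.0, 200))    # stand-in for a different inner product
# Same data, different inner products -> different ROM bases.
print(pod_basis(S, M_euclidean, 5).shape, pod_basis(S, M_weighted, 5).shape)
```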
We investigate the capability of neural-network-based model order reduction, i.e., the autoencoder (AE), for fluid flows. As an example model, an AE comprising a convolutional neural network and multi-layer perceptrons is considered in this study. The AE model is assessed with four canonical fluid flows, namely: (1) two-dimensional cylinder wake, (2) its transient process, (3) NOAA sea surface temperature, and (4) $y-z$ sectional field of turbulent channel flow, in terms of the number of latent modes, the choice of nonlinear activation function, and the number of weights contained in the AE model. We find that the AE models are sensitive to the choice of the aforementioned parameters depending on the target flows. Finally, we discuss prospective applications and perspectives of machine-learning-based order reduction for numerical and experimental studies in the fluid dynamics community.
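
A minimal sketch of a convolutional autoencoder of this kind is shown below; the field size, channel counts, and number of latent modes are illustrative assumptions, not the configurations assessed in the study.

```python
import torch
import torch.nn as nn

class FlowAE(nn.Module):
    def __init__(self, latent=8):
        super().__init__()
        # CNN encoder compresses a 1 x 64 x 64 flow field; an MLP maps it to `latent` modes.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.Tanh(),     # 32 x 32
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.Tanh(),    # 16 x 16
            nn.Flatten(),
            nn.Linear(16 * 16 * 16, latent),
        )
        # MLP + transposed convolutions reconstruct the field from the latent modes.
        self.decoder = nn.Sequential(
            nn.Linear(latent, 16 * 16 * 16), nn.Tanh(),
            nn.Unflatten(1, (16, 16, 16)),
            nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1), nn.Tanh(),  # 32 x 32
            nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1),              # 64 x 64
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

ae = FlowAE(latent=8)
field = torch.randn(4, 1, 64, 64)                # e.g. 4 snapshots of a wake field
loss = nn.functional.mse_loss(ae(field), field)  # reconstruction error to minimize
print(loss.item())
```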
Recently, physics-driven deep learning methods have shown particular promise for the prediction of physical fields, especially for reducing the dependency on large amounts of pre-computed training data. In this work, we target the physics-driven learning of complex flow fields with high resolutions. We propose the use of convolutional neural network (CNN) based U-net architectures to efficiently represent and reconstruct the input and output fields, respectively. By introducing the Navier-Stokes equations and boundary conditions into the loss functions, the physics-driven CNN is designed to predict the corresponding steady flow fields directly. In particular, this prevents many of the difficulties associated with approaches employing fully connected neural networks. Several numerical experiments are conducted to investigate the behavior of the CNN approach, and the results indicate that first-order accuracy has been achieved. Specifically, for the case of a flow around a cylinder, different flow regimes can be learned and the adhered twin vortices are predicted correctly. The numerical results also show that the training for multiple cases is accelerated significantly, especially for the difficult cases at low Reynolds numbers, and when limited reference solutions are used as supplementary learning targets.
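
The physics-driven loss idea can be illustrated by penalizing a PDE residual of the predicted field instead of a mismatch against pre-computed targets. The sketch below shows only a finite-difference continuity (divergence-free) residual on a uniform grid; the loss described in the paper also includes the momentum equations and boundary conditions, which are omitted here.

```python
import torch

def divergence_residual(uv, dx):
    """Mean-square divergence of a predicted velocity field uv: (batch, 2, H, W)."""
    u, v = uv[:, 0:1], uv[:, 1:2]
    # Central differences in the interior of the domain.
    du_dx = (u[:, :, :, 2:] - u[:, :, :, :-2]) / (2 * dx)
    dv_dy = (v[:, :, 2:, :] - v[:, :, :-2, :]) / (2 * dx)
    div = du_dx[:, :, 1:-1, :] + dv_dy[:, :, :, 1:-1]
    return (div ** 2).mean()

# A random "prediction" has a large residual; in training, this residual would be
# backpropagated through the CNN in place of (or alongside) a data-mismatch loss.
pred = torch.randn(2, 2, 64, 64, requires_grad=True)
loss = divergence_residual(pred, dx=1.0 / 64)
loss.backward()
print(loss.item())
```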
We investigate theoretically and numerically the use of the least-squares finite-element method (LSFEM) to approach data-assimilation problems for the steady-state, incompressible Navier-Stokes equations. Our LSFEM discretization is based on a stress-velocity-pressure (S-V-P) first-order formulation, using discrete counterparts of the Sobolev spaces $H(\mathrm{div}) \times H^1 \times L^2$, respectively. Resolution of the system is via minimization of a least-squares functional representing the magnitude of the residual of the equations. A simple and immediate approach to extend this solver to data assimilation is to add a data-discrepancy term to the functional. Whereas most data-assimilation techniques require a large number of evaluations of the forward simulation and are therefore very expensive, the approach proposed in this work uniquely has the same cost as a single forward run. However, the question arises: what is the statistical model implied by this choice? We answer this within the Bayesian framework, establishing the latent background covariance model and the likelihood. Further, we demonstrate that, in the linear case, the method is equivalent to application of the Kalman filter, and derive the posterior covariance. We practically demonstrate the capabilities of our method on a backward-facing step case. Our LSFEM formulation (without data) is shown to have good approximation quality, even on relatively coarse meshes, in particular with respect to mass conservation and reattachment location. Adding limited velocity measurements from experiment, we show that the method is able to correct for discretization error on very coarse meshes, as well as correct for the influence of unknown and uncertain boundary conditions.
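
Schematically, the data-augmented least-squares functional described above takes the following form, where the residual operators $R_i$ and the weight $\lambda$ are placeholders and the paper's exact S-V-P formulation may differ in detail:

\[
J(\boldsymbol{\sigma}, \mathbf{u}, p) \;=\;
\sum_i \big\| R_i(\boldsymbol{\sigma}, \mathbf{u}, p) \big\|_{L^2(\Omega)}^2
\;+\;
\lambda \sum_k \big| \mathbf{u}(x_k) - \mathbf{u}_k^{\mathrm{obs}} \big|^2 ,
\]

minimized over discrete subspaces of $H(\mathrm{div}) \times H^1 \times L^2$ for the stress, velocity, and pressure. The first term measures the residual of the first-order Navier-Stokes system, and the second is the data-discrepancy term built from velocity observations $\mathbf{u}_k^{\mathrm{obs}}$ at measurement points $x_k$.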
