
Dimension Reduced Turbulent Flow Data From Deep Vector Quantizers

Posted by Andrew Bragg
Publication date: 2021
Research field: Physics
Paper language: English





Analyzing large-scale data from simulations of turbulent flows is memory intensive, requiring significant resources. This major challenge highlights the need for data compression techniques. In this study, we apply a physics-informed deep learning technique based on vector quantization to generate a discrete, low-dimensional representation of data from simulations of three-dimensional turbulent flows. The deep learning framework is composed of convolutional layers and incorporates physical constraints on the flow, such as preserving incompressibility and global statistical characteristics of the velocity gradients. The accuracy of the model is assessed using statistical, comparison-based similarity and physics-based metrics. The training data set is produced from Direct Numerical Simulation of an incompressible, statistically stationary, isotropic turbulent flow. The performance of this lossy data compression scheme is evaluated not only on unseen data from the stationary, isotropic turbulent flow, but also on data from decaying isotropic turbulence, a Taylor-Green vortex flow, and a turbulent channel flow. Defining the compression ratio (CR) as the ratio of the original data size to the compressed size, the results show that our model based on vector quantization can offer CR$=85$ with a mean square error (MSE) of $O(10^{-3})$, and predictions that faithfully reproduce the statistics of the flow, except at the very smallest scales, where there is some loss. Compared to the recent study of Glaws et al. (Physical Review Fluids, 5(11):114602, 2020), which was based on a conventional autoencoder (where compression is performed in a continuous space), our model improves the CR by more than $30$ percent...
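The paper itself does not include code, but the core idea (a 3D convolutional encoder, a vector-quantization bottleneck so that the compressed representation is a grid of integer codebook indices, and a decoder trained with a reconstruction loss plus a penalty on the divergence of the reconstructed velocity) can be sketched roughly as follows in PyTorch. The layer sizes, codebook size, loss weights, and the assumption of a periodic grid are illustrative choices, not the authors' configuration.

```python
# Minimal sketch of a physics-informed VQ autoencoder for 3D velocity fields (PyTorch).
# Layer sizes, codebook size, and loss weights are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    def __init__(self, num_codes=512, code_dim=64, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)
        self.codebook.weight.data.uniform_(-1 / num_codes, 1 / num_codes)
        self.beta = beta

    def forward(self, z):
        # z: (B, C, D, H, W) -> flatten spatial positions to rows of length C
        b, c, d, h, w = z.shape
        flat = z.permute(0, 2, 3, 4, 1).reshape(-1, c)
        dist = torch.cdist(flat, self.codebook.weight)           # distances to all codes
        idx = dist.argmin(dim=1)                                  # nearest-code indices
        zq = self.codebook(idx).view(b, d, h, w, c).permute(0, 4, 1, 2, 3)
        # codebook + commitment losses, straight-through gradient for the decoder
        loss = F.mse_loss(zq, z.detach()) + self.beta * F.mse_loss(z, zq.detach())
        zq = z + (zq - z).detach()
        return zq, loss, idx

def divergence(u, dx=1.0):
    # central-difference du/dx + dv/dy + dw/dz, assuming a periodic grid
    dudx = (torch.roll(u[:, 0], -1, dims=1) - torch.roll(u[:, 0], 1, dims=1)) / (2 * dx)
    dvdy = (torch.roll(u[:, 1], -1, dims=2) - torch.roll(u[:, 1], 1, dims=2)) / (2 * dx)
    dwdz = (torch.roll(u[:, 2], -1, dims=3) - torch.roll(u[:, 2], 1, dims=3)) / (2 * dx)
    return dudx + dvdy + dwdz

class VQFlowAE(nn.Module):
    def __init__(self, code_dim=64):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, code_dim, 4, stride=2, padding=1),
        )
        self.vq = VectorQuantizer(code_dim=code_dim)
        self.dec = nn.Sequential(
            nn.ConvTranspose3d(code_dim, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, u):
        zq, vq_loss, _ = self.vq(self.enc(u))
        u_hat = self.dec(zq)
        recon = F.mse_loss(u_hat, u)
        phys = divergence(u_hat).pow(2).mean()     # incompressibility penalty
        return u_hat, recon + vq_loss + 0.1 * phys
```

In a sketch like this, the compressed artifact is the integer index field `idx` together with the small codebook, which is where a compression ratio such as CR$=85$ can come from: spatial downsampling combined with storing integer indices instead of floating-point velocities.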




Read also

Turbulence modeling is a classical approach to address the multiscale nature of fluid turbulence. Instead of resolving all scales of motion, which is currently mathematically and numerically intractable, reduced models that capture the large-scale behavior are derived. One of the most popular reduced models is the Reynolds-averaged Navier-Stokes (RANS) equations. The goal is to solve the RANS equations for the mean velocity and pressure fields. However, the RANS equations contain a term called the Reynolds stress tensor, which is not known in terms of the mean velocity field. Many RANS turbulence models have been proposed to model the Reynolds stress tensor in terms of the mean velocity field, but are usually not suitably general for all flow fields of interest. Data-driven turbulence models have recently garnered considerable attention and have been rapidly developed. In a seminal work, Ling et al. (2016) developed the tensor basis neural network (TBNN), which was used to learn a general Galilean-invariant model for the Reynolds stress tensor. The TBNN was applied to a variety of flow fields with encouraging results. In the present study, the TBNN is applied to turbulent channel flow. Its performance is compared with classical turbulence models as well as a neural network model that does not preserve Galilean invariance. A sensitivity study on the TBNN reveals that the network attempts to adjust to the dataset, but is limited by the mathematical form that guarantees Galilean invariance.
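As a rough illustration of the TBNN architecture referred to above: a fully connected network maps the scalar invariants of the normalized mean strain-rate and rotation-rate tensors to coefficients that weight a fixed tensor basis, so the predicted anisotropy is Galilean invariant by construction. The PyTorch sketch below assumes the 5-invariant, 10-basis setup of Ling et al. (2016); the layer widths and activation are illustrative assumptions.

```python
# Minimal sketch of a tensor basis neural network (TBNN): an MLP maps scalar invariants
# to coefficients g_n, and the anisotropy tensor is a linear combination of a
# Galilean-invariant tensor basis. Widths and depth are illustrative assumptions.
import torch
import torch.nn as nn

class TBNN(nn.Module):
    def __init__(self, n_invariants=5, n_basis=10, width=30, depth=8):
        super().__init__()
        layers, d = [], n_invariants
        for _ in range(depth):
            layers += [nn.Linear(d, width), nn.LeakyReLU()]
            d = width
        layers += [nn.Linear(d, n_basis)]
        self.mlp = nn.Sequential(*layers)

    def forward(self, invariants, basis):
        # invariants: (N, 5); basis: (N, 10, 3, 3) tensor basis T^(n)
        g = self.mlp(invariants)                    # (N, 10) coefficients g_n
        b = torch.einsum("nk,nkij->nij", g, basis)  # anisotropy b = sum_n g_n T^(n)
        return b
```

Because the network only ever outputs the scalar weights $g_n$, any frame dependence must enter through the basis tensors; this is the structural constraint that the sensitivity study mentioned above identifies as both the strength and the limitation of the model.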
The ultimate goal of a sound theory of turbulence in fluids is to close, in a rational way, the Reynolds equations, namely to express the time-averaged turbulent stress tensor as a function of the time-averaged velocity field. This closure problem is a deep and unsolved problem of statistical physics whose solution requires going beyond the assumption of a homogeneous and isotropic state, as fluctuations in turbulent flows are strongly related to the geometry of the flow. This links the dissipation to the space dependence of the average velocity field. Based on the idea that dissipation in fully developed turbulence occurs through singular events resulting from an evolution described by the Euler equations, it has recently been observed that the closure problem is strongly restricted, and that it implies that the turbulent stress is a non-local function (in space) of the average velocity field, an extension of the classical Boussinesq theory of turbulent viscosity. The resulting equations for the turbulent stress are derived here in one of the simplest possible physical situations, turbulent Poiseuille flow between two parallel plates. In this case the integral kernel giving the turbulent stress as a function of the averaged velocity field takes a simple form, leading to a full analysis of the averaged turbulent flow in the limit of very large Reynolds number. In this limit one has to match a viscous boundary layer, near the walls bounding the flow, with an outer solution in the bulk of the flow. This asymptotic analysis is non-trivial because one has to match solutions containing logarithms. A non-trivial and somewhat unexpected feature of this solution is that, besides the boundary layers close to the walls, there is another inner boundary layer near the center plane of the flow.
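Schematically, and only as a hedged illustration of the closure discussed above (the paper derives the specific kernel, which is not reproduced here), the contrast between the local Boussinesq relation and a non-local closure for plane Poiseuille flow can be written as:

```latex
% Reynolds-averaged momentum balance for plane Poiseuille flow, with the turbulent
% shear stress written either via a local eddy viscosity (Boussinesq) or as a
% nonlocal functional of the mean velocity gradient. The kernel K is schematic.
\begin{aligned}
  \nu \frac{d^{2}\bar{u}}{dy^{2}} - \frac{d}{dy}\,\overline{u'v'} &= \frac{1}{\rho}\frac{d\bar{p}}{dx},\\
  \text{Boussinesq (local):}\quad -\overline{u'v'}(y) &= \nu_{t}(y)\,\frac{d\bar{u}}{dy},\\
  \text{Non-local closure (schematic):}\quad -\overline{u'v'}(y) &= \int K(y,y')\,\frac{d\bar{u}}{dy}\bigg|_{y'}\,dy'.
\end{aligned}
```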
We investigate the applicability of a machine-learning-based reduced-order model (ML-ROM) to three-dimensional complex flows. As an example, we consider a turbulent channel flow at a friction Reynolds number of $Re_\tau=110$ in a minimal domain which can maintain coherent structures of turbulence. The training data set is prepared by direct numerical simulation (DNS). The present ML-ROM is constructed by combining a three-dimensional convolutional neural network autoencoder (CNN-AE) and a long short-term memory (LSTM) network. The CNN-AE works to map high-dimensional flow fields into a low-dimensional latent space. The LSTM is then utilized to predict the temporal evolution of the latent vectors obtained by the CNN-AE. The combination of CNN-AE and LSTM can represent the spatio-temporal high-dimensional dynamics of flow fields by integrating only the temporal evolution of the low-dimensional latent dynamics. The turbulent flow fields reproduced by the present ML-ROM show statistical agreement with the reference DNS data in a time-ensemble sense, which can also be seen through an orbit-based analysis. The influences of the population of vortical structures contained in the domain and of the time interval used for temporal prediction on the ML-ROM performance are also investigated. The potential and limitations of the present ML-ROM for turbulence analysis are also discussed.
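A minimal PyTorch sketch of the CNN-AE plus LSTM construction described above: the autoencoder compresses each snapshot to a latent vector, and the LSTM advances the latent vectors in time, so only the low-dimensional dynamics need to be integrated. The $32^3$ snapshot size, latent dimension, and layer widths are illustrative assumptions, not the configuration used in the study.

```python
# Minimal sketch of a CNN autoencoder + LSTM reduced-order model (PyTorch).
# Assumes 32^3 snapshots with 3 velocity components; all sizes are illustrative.
import torch
import torch.nn as nn

class CNNAE(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 32^3 -> 16^3
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 16^3 -> 8^3
            nn.Flatten(), nn.Linear(32 * 8 ** 3, latent_dim),
        )
        self.dec_fc = nn.Linear(latent_dim, 32 * 8 ** 3)
        self.dec = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 8^3 -> 16^3
            nn.ConvTranspose3d(16, 3, 4, stride=2, padding=1),              # 16^3 -> 32^3
        )

    def encode(self, u):          # u: (B, 3, 32, 32, 32) velocity snapshot
        return self.enc(u)

    def decode(self, z):
        return self.dec(self.dec_fc(z).view(-1, 32, 8, 8, 8))

class LatentLSTM(nn.Module):
    def __init__(self, latent_dim=128, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(latent_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, latent_dim)

    def forward(self, z_seq):     # z_seq: (B, T, latent_dim)
        out, _ = self.lstm(z_seq)
        return self.head(out)     # predicted next latent vector at each step
```

At prediction time only the latent LSTM is stepped forward; the decoder is applied when a full flow field is needed, which is what keeps the cost of the ROM low compared with the DNS.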
This paper proposes a deep-learning-based generalized reduced-order model (ROM) that can provide a fast and accurate prediction of the glottal flow during normal phonation. The approach is based on the assumption that the vibration of the vocal folds can be represented by a universal kinematics equation (UKE), which is used to generate a glottal shape library. For each shape in the library, the ground-truth values of the flow rate and pressure distribution are obtained from the high-fidelity Navier-Stokes (N-S) solution. A fully connected deep neural network (DNN) is then trained to build the empirical mapping between the shapes and the flow rate and pressure distributions. The resulting DNN-based reduced-order flow solver is coupled with a finite-element method (FEM) based solid dynamics solver for FSI simulation of phonation. The reduced-order model is evaluated by comparison with the Navier-Stokes solutions for both static glottal shapes and FSI simulations. The results demonstrate good prediction performance in both accuracy and efficiency.
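A minimal sketch of the kind of fully connected surrogate described above, written in PyTorch: the network maps a parameterized glottal shape to a flow rate and a discretized pressure distribution. The number of shape parameters, the number of pressure stations, and the layer widths are hypothetical placeholders, not values from the paper.

```python
# Minimal sketch of a fully connected surrogate for glottal flow (PyTorch).
# n_shape, n_pressure, and widths are hypothetical placeholders.
import torch
import torch.nn as nn

class GlottalFlowDNN(nn.Module):
    def __init__(self, n_shape=16, n_pressure=64, width=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_shape, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, 1 + n_pressure),   # [flow rate | pressure at n stations]
        )

    def forward(self, shape_params):            # shape_params: (B, n_shape)
        out = self.net(shape_params)
        flow_rate, pressure = out[:, :1], out[:, 1:]
        return flow_rate, pressure
```

In an FSI loop, the FEM solid solver would supply the shape parameters at each time step and receive the predicted pressure load back, replacing the Navier-Stokes solve for the fluid side.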
Within the domain of Computational Fluid Dynamics, Direct Numerical Simulation (DNS) is used to obtain highly accurate numerical solutions for fluid flows. However, this approach for numerically solving the Navier-Stokes equations is extremely computationally expensive, mostly due to the requirement of greatly refined grids. Large Eddy Simulation (LES) presents a more computationally efficient approach for solving fluid flows on lower-resolution (LR) grids, but results in an overall reduction in solution fidelity. In this paper, we introduce a novel deep learning framework, SR-DNS Net, which aims to mitigate this inherent trade-off between solution fidelity and computational complexity by leveraging deep learning techniques used in image super-resolution. Using our model, we aim to learn the mapping from a coarser LR solution to a refined high-resolution (HR) DNS solution so as to eliminate the need for performing DNS on highly refined grids. Our model efficiently reconstructs high-fidelity DNS data from LES-like low-resolution solutions while yielding good reconstruction metrics. Our implementation thus improves the solution accuracy of LR solutions while incurring only the marginal increase in computational cost required to deploy the trained deep learning model.
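A compact PyTorch sketch of the super-resolution idea described above (not the authors' SR-DNS Net architecture): residual convolutional blocks followed by learned sub-pixel upsampling from an LES-like low-resolution field to a DNS-like high-resolution one. The 2D two-component input, the $4\times$ upscaling factor, and the block count are illustrative assumptions.

```python
# Minimal sketch of a super-resolution network for coarse-to-fine flow fields (PyTorch).
# Channel count, upscale factor, and number of residual blocks are illustrative.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)               # residual connection

class SRFlowNet(nn.Module):
    def __init__(self, channels=2, width=64, n_blocks=8, scale=4):
        super().__init__()
        self.head = nn.Conv2d(channels, width, 3, padding=1)
        self.body = nn.Sequential(*[ResBlock(width) for _ in range(n_blocks)])
        self.up = nn.Sequential(
            nn.Conv2d(width, width * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),            # learned sub-pixel upsampling
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, lr_field):               # lr_field: (B, channels, H, W)
        x = self.head(lr_field)
        return self.up(self.body(x) + x)       # HR-like output at scale x resolution
```

Training such a network would pair filtered (LES-like) fields with the corresponding DNS fields and minimize a reconstruction loss; the trained model is then cheap to evaluate compared with running DNS directly.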