
Towards high-accuracy deep learning inference of compressible turbulent flows over aerofoils

Added by Liwei Chen
Publication date: 2021
Language: English





The present study investigates the accurate inference of Reynolds-averaged Navier-Stokes solutions for compressible flow over aerofoils in two dimensions with a deep neural network. Our approach yields networks that learn to generate precise flow fields for varying body-fitted, structured grids by providing them with an encoding of the corresponding mapping to a canonical space for the solutions. We apply the deep neural network model to a benchmark case of incompressible flow at randomly chosen angles of attack and Reynolds numbers and achieve an improvement of more than an order of magnitude over previous work. Furthermore, for transonic flow cases, the model accurately predicts complex flow behaviour at high Reynolds numbers, such as shock-wave/boundary-layer interaction, as well as quantitative distributions such as the pressure coefficient, the skin-friction coefficient, and wake total-pressure profiles downstream of the aerofoils. The proposed deep learning method significantly speeds up the prediction of flow fields and shows promise for enabling fast aerodynamic design.
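For readers who want a concrete picture of the kind of model described above, the sketch below shows a small convolutional encoder-decoder that maps per-cell inputs on a body-fitted structured grid (a grid-mapping encoding plus freestream conditions broadcast as constant channels) to flow-field channels. All layer sizes, channel counts and names are illustrative assumptions, not the network used in the paper.

```python
# Minimal sketch of a convolutional encoder-decoder for flow-field inference on a
# body-fitted structured grid. All names, channel counts and layer sizes are
# illustrative assumptions, not the architecture of the paper.
import torch
import torch.nn as nn

class FlowFieldNet(nn.Module):
    def __init__(self, in_ch=6, out_ch=4, width=32):
        # in_ch: e.g. grid-mapping encoding (coordinates, metric terms) plus
        #        freestream Mach number and angle of attack as constant channels
        # out_ch: e.g. pressure, density and the two velocity components
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, width, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(width, 2 * width, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(2 * width, width, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(width, out_ch, 4, stride=2, padding=1),
        )

    def forward(self, x):
        # x: (batch, in_ch, Ni, Nj) tensor sampled on the structured grid
        return self.decoder(self.encoder(x))

model = FlowFieldNet()
fields = model(torch.randn(1, 6, 128, 128))  # -> (1, 4, 128, 128)
```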



Related research

Within the domain of Computational Fluid Dynamics, Direct Numerical Simulation (DNS) is used to obtain highly accurate numerical solutions for fluid flows. However, this approach to numerically solving the Navier-Stokes equations is extremely computationally expensive, mostly due to the requirement of greatly refined grids. Large Eddy Simulation (LES) presents a more computationally efficient approach for solving fluid flows on lower-resolution (LR) grids, but results in an overall reduction in solution fidelity. In this paper, we introduce a novel deep learning framework, SR-DNS Net, which aims to mitigate this inherent trade-off between solution fidelity and computational complexity by leveraging deep learning techniques used in image super-resolution. Using our model, we wish to learn the mapping from a coarser LR solution to a refined high-resolution (HR) DNS solution so as to eliminate the need for performing DNS on highly refined grids. Our model efficiently reconstructs the high-fidelity DNS data from LES-like low-resolution solutions while yielding good reconstruction metrics. Our implementation thus improves the solution accuracy of LR solutions while incurring only a marginal increase in the computational cost of deploying the trained deep learning model.
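As an illustration of the super-resolution idea (not the SR-DNS Net architecture itself, which the abstract does not detail), the following sketch upsamples a coarse flow field to a finer grid with a standard pixel-shuffle layer; the 4x scale factor, channel count and layer widths are assumptions.

```python
# Minimal sketch of an image-super-resolution style network applied to flow data:
# it upsamples a coarse (LR) field to a finer (HR) grid. Layer choices and the
# 4x upscale factor are illustrative assumptions, not the SR-DNS Net design.
import torch
import torch.nn as nn

class FlowSRNet(nn.Module):
    def __init__(self, channels=3, scale=4, width=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            # pixel-shuffle upsampling, as in standard SR architectures
            nn.Conv2d(width, channels * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, lr_field):
        # lr_field: (batch, channels, nx, ny) coarse LES-like solution
        return self.body(lr_field)  # -> (batch, channels, scale*nx, scale*ny)

hr_pred = FlowSRNet()(torch.randn(1, 3, 32, 32))  # -> (1, 3, 128, 128)
```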
Guiyu Cao, Kun Xu, Liang Pan (2021)
In this paper, a high-order gas-kinetic scheme in general curvilinear coordinates (HGKS-cur) is developed for the numerical simulation of compressible turbulence. Based on the coordinate transformation, the Bhatnagar-Gross-Krook (BGK) equation is transformed from physical space to computational space. To deal with a general mesh given by discretized points, the geometrical metrics are constructed by dimension-by-dimension Lagrangian interpolation. The multidimensional weighted essentially non-oscillatory (WENO) reconstruction is adopted in the computational domain for spatial accuracy, where the reconstructed variables are the cell-averaged Jacobian and the Jacobian-weighted conservative variables. The two-stage fourth-order method, originally developed for spatial-temporal coupled flow solvers, is used for the temporal discretization. Numerical examples for inviscid and laminar flows validate the accuracy and geometrical conservation law of HGKS-cur. As a direct application, HGKS-cur is used for implicit large eddy simulation (iLES) of compressible wall-bounded turbulent flows, including compressible turbulent channel flow and compressible turbulent flow over periodic hills. The iLES results with HGKS-cur are in good agreement with reference spectral methods and high-order finite volume methods. The performance of HGKS-cur demonstrates its capability as a powerful tool for the numerical simulation of compressible wall-bounded turbulent flows and massively separated flows.
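For context, the two-stage fourth-order temporal discretization mentioned above is commonly quoted in the following form for a semi-discrete system $\partial_t w = \mathcal{L}(w)$ (the paper's exact notation may differ):

$$ w^{*} = w^{n} + \tfrac{1}{2}\Delta t\, \mathcal{L}(w^{n}) + \tfrac{1}{8}\Delta t^{2}\, \partial_t \mathcal{L}(w^{n}), \qquad w^{n+1} = w^{n} + \Delta t\, \mathcal{L}(w^{n}) + \tfrac{1}{6}\Delta t^{2}\big[\partial_t \mathcal{L}(w^{n}) + 2\, \partial_t \mathcal{L}(w^{*})\big], $$

i.e. a single intermediate stage plus time derivatives of the flux operator yield fourth-order accuracy in time.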
The scale-space energy density function, $E(\mathbf{x}, \mathbf{r})$, is defined as the derivative of the two-point velocity correlation. The function $E$ describes the turbulent kinetic energy density of scale $\mathbf{r}$ at a location $\mathbf{x}$ and can be considered a generalization of the spectral energy density function concept to inhomogeneous flows. We derive the transport equation for the scale-space energy density function in compressible flows to develop a better understanding of scale-to-scale energy transfer and the degree of non-locality of the energy interactions. Specifically, the effects of variable density and dilatation on turbulence energy dynamics are identified. It is expected that these findings will yield deeper insight into compressibility effects, leading to improved models at all levels of closure for mass flux, density variance, pressure-dilatation, pressure-strain correlation and dilatational dissipation processes.
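As a rough illustration of the definition (sign and normalization conventions vary; the version below is one self-consistent choice, with $R$ the two-point correlation of the velocity fluctuations at separation distance $r$, averaged over the direction of $\mathbf{r}$, and $k$ the turbulent kinetic energy):

$$ R(\mathbf{x}, r) = \big\langle u_i'(\mathbf{x})\, u_i'(\mathbf{x} + \mathbf{r}) \big\rangle, \qquad E(\mathbf{x}, r) = -\frac{1}{2}\frac{\partial R}{\partial r}, \qquad \int_0^{\infty} E(\mathbf{x}, r)\, \mathrm{d}r = k(\mathbf{x}), $$

so that integrating the energy density over all scales recovers the turbulent kinetic energy at $\mathbf{x}$.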
Turbulence modeling is a classical approach to addressing the multiscale nature of fluid turbulence. Instead of resolving all scales of motion, which is currently mathematically and numerically intractable, reduced models that capture the large-scale behavior are derived. One of the most popular reduced models is the Reynolds-averaged Navier-Stokes (RANS) equations. The goal is to solve the RANS equations for the mean velocity and pressure fields. However, the RANS equations contain a term called the Reynolds stress tensor, which is not known in terms of the mean velocity field. Many RANS turbulence models have been proposed to model the Reynolds stress tensor in terms of the mean velocity field, but they are usually not suitably general for all flow fields of interest. Data-driven turbulence models have recently garnered considerable attention and have been rapidly developed. In a seminal work, Ling et al. (2016) developed the tensor basis neural network (TBNN), which was used to learn a general Galilean-invariant model for the Reynolds stress tensor. The TBNN was applied to a variety of flow fields with encouraging results. In the present study, the TBNN is applied to turbulent channel flow. Its performance is compared with classical turbulence models as well as a neural network model that does not preserve Galilean invariance. A sensitivity study on the TBNN reveals that the network attempts to adjust to the dataset, but is limited by the mathematical form that guarantees Galilean invariance.
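The TBNN idea is compact enough to sketch in a few lines: a dense network maps the scalar invariants of the mean strain- and rotation-rate tensors to coefficients $g_n$, and the Reynolds-stress anisotropy is assembled as a linear combination of the tensor basis, which is what guarantees Galilean invariance. Layer sizes and names below are illustrative assumptions, not those of Ling et al.

```python
# Minimal sketch of the tensor basis neural network (TBNN) idea: a dense network
# maps the scalar invariants to coefficients g_n, and the anisotropy tensor is
# assembled as a linear combination of the tensor basis T^(n).
import torch
import torch.nn as nn

class TBNN(nn.Module):
    def __init__(self, n_invariants=5, n_basis=10, width=30, depth=8):
        super().__init__()
        layers = [nn.Linear(n_invariants, width), nn.ReLU()]
        for _ in range(depth - 1):
            layers += [nn.Linear(width, width), nn.ReLU()]
        layers += [nn.Linear(width, n_basis)]
        self.coeff_net = nn.Sequential(*layers)

    def forward(self, invariants, basis):
        # invariants: (batch, 5)       scalar invariants of mean strain/rotation rates
        # basis:      (batch, 10, 3, 3) tensor basis T^(n)
        g = self.coeff_net(invariants)                  # (batch, 10)
        return torch.einsum("bn,bnij->bij", g, basis)   # anisotropy b_ij

b = TBNN()(torch.randn(4, 5), torch.randn(4, 10, 3, 3))  # -> (4, 3, 3)
```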
We study numerically the joint mixing of salt and colloids by a chaotic velocity field $\mathbf{V}$, and how salt inhomogeneities accelerate or delay colloid mixing by inducing a velocity drift $\mathbf{V}_{\rm dp}$ between colloids and fluid particles, as proposed in recent experiments \cite{Deseigne2013}. We demonstrate that, because the drift velocity is no longer divergence-free, small variations to the total velocity field drastically affect the evolution of the colloid variance $\sigma^2 = \langle C^2 \rangle - \langle C \rangle^2$. A consequence is that mixing strongly depends on the mutual coherence between the colloid and salt concentration fields, the short-time evolution of scalar variance being governed by a new variance production term $P = -\langle C^2\, \nabla \cdot \mathbf{V}_{\rm dp} \rangle / 2$ when scalar gradients are not yet developed, so that dissipation is weak. Depending on initial conditions, mixing is then delayed or enhanced, and it is possible to find examples for which the two regimes (fast mixing followed by slow mixing) are observed consecutively when the variance source term reverses its sign. This is indeed the case for localized patches modelled as Gaussian concentration profiles.
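For a brief sketch of where this production term comes from, assume the colloid concentration obeys the conservation form $\partial_t C + \nabla \cdot \big(C(\mathbf{V} + \mathbf{V}_{\rm dp})\big) = D \nabla^2 C$ with $\nabla \cdot \mathbf{V} = 0$ (an assumption consistent with, but not quoted from, the abstract). Multiplying by $C$ and averaging over a closed or periodic domain gives

$$ \frac{\mathrm{d}}{\mathrm{d}t}\Big\langle \tfrac{1}{2} C^2 \Big\rangle = -\tfrac{1}{2}\big\langle C^2\, \nabla \cdot \mathbf{V}_{\rm dp} \big\rangle - D \big\langle |\nabla C|^2 \big\rangle, $$

where the first term on the right is the production term $P$ quoted above and the second is the diffusive dissipation, which stays small until scalar gradients develop.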
