
Assessment of supervised machine learning methods for fluid flows

Added by Kai Fukami
Publication date: 2020
Field: Physics
Language: English





We apply supervised machine learning techniques to a number of regression problems in fluid dynamics. Four machine learning architectures are examined in terms of their characteristics, accuracy, computational cost, and robustness for canonical flow problems. We consider the estimation of force coefficients and wakes from a limited number of sensors on the surface for flows over a cylinder and a NACA0012 airfoil with a Gurney flap. The influence of the temporal density of the training data is also examined. Furthermore, we consider the use of convolutional neural networks in the context of super-resolution analysis of a two-dimensional cylinder wake, two-dimensional decaying isotropic turbulence, and three-dimensional turbulent channel flow. In the concluding remarks, we summarize the findings from the range of regression-type problems considered herein.
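As a rough, hedged illustration of the sensor-based regression task described above, the sketch below trains a small multilayer perceptron to map a handful of surface sensor readings to two force coefficients. The sensor count, layer sizes, and synthetic data are illustrative assumptions, not the architectures or data examined in the paper.

```python
# Minimal sketch (not the paper's model): regress force coefficients from
# a few surface sensors with a small MLP. Data here are synthetic placeholders.
import torch
import torch.nn as nn

n_sensors, n_outputs = 8, 2          # assumed: 8 surface probes -> (C_L, C_D)
model = nn.Sequential(
    nn.Linear(n_sensors, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, n_outputs),
)

# Placeholder training data; in practice these would come from DNS or experiments.
x = torch.randn(1024, n_sensors)     # sensor snapshots
y = torch.randn(1024, n_outputs)     # corresponding force coefficients

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print("final training loss:", loss.item())
```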



Related research

We present a new turbulent data reconstruction method with supervised machine learning techniques inspired by super resolution and inbetweening, which can recover high-resolution turbulent flows from grossly coarse flow data in space and time. For the present machine-learning-based data reconstruction, we use the downsampled skip-connection/multi-scale model based on a convolutional neural network to incorporate the multi-scale nature of fluid flows into its network structure. As an initial example, the model is applied to a two-dimensional cylinder wake at $Re_D = 100$. The flow fields reconstructed by the proposed method show good agreement with the reference data obtained by direct numerical simulation. Next, we examine the capability of the proposed model for two-dimensional decaying homogeneous isotropic turbulence. The machine-learned models can follow the decaying evolution from coarse input data in space and time, according to an assessment with the turbulence statistics. The proposed concept is further investigated for a complex turbulent channel flow over a three-dimensional domain at $Re_\tau = 180$. The present model can reconstruct highly resolved turbulent flows from very coarse input data in space, and it can also reproduce the temporal evolution when the time interval is appropriately chosen. The dependence on the number of training snapshots and on the duration between the first and last frames, based on a temporal two-point correlation coefficient, is also assessed to reveal the capability and robustness of spatio-temporal super-resolution reconstruction. These results suggest that the present method can handle a range of flow reconstruction problems in support of computational and experimental efforts.
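The sketch below gives a loose impression of the multi-scale idea: parallel convolution branches with different kernel sizes act on an upsampled coarse field, and a skip connection adds the interpolated input back. The layer counts, channel widths, and bicubic upsampling are assumptions for illustration; this is not the authors' exact downsampled skip-connection/multi-scale architecture.

```python
# Rough sketch of a multi-scale super-resolution CNN with a skip connection.
# Not the authors' DSC/MS model; sizes and layers are illustrative assumptions.
import torch
import torch.nn as nn

class MultiScaleSR(nn.Module):
    def __init__(self, channels=2, scale=8):          # e.g. (u, v) velocity components
        super().__init__()
        self.up = nn.Upsample(scale_factor=scale, mode="bicubic", align_corners=False)
        # Parallel branches with different receptive fields capture multiple scales.
        self.b3 = nn.Conv2d(channels, 16, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(channels, 16, kernel_size=5, padding=2)
        self.b9 = nn.Conv2d(channels, 16, kernel_size=9, padding=4)
        self.fuse = nn.Conv2d(48, channels, kernel_size=3, padding=1)

    def forward(self, coarse):
        x = self.up(coarse)                            # interpolate the coarse input
        feats = torch.cat([torch.relu(self.b3(x)),
                           torch.relu(self.b5(x)),
                           torch.relu(self.b9(x))], dim=1)
        return x + self.fuse(feats)                    # skip connection: correct the interpolation

model = MultiScaleSR()
coarse = torch.randn(4, 2, 16, 16)                    # placeholder coarse wake snapshots
fine = model(coarse)                                  # -> (4, 2, 128, 128)
```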
In recent years, there has been a surge in applications of neural networks (NNs) in the physical sciences. Although various algorithmic advances have been proposed, there are, thus far, only a limited number of studies that assess the interpretability of neural networks. This has contributed to the hasty characterization of most NN methods as black boxes and has hindered wider acceptance of more powerful machine learning algorithms for physics. In an effort to address such issues in fluid flow modeling, we use a probabilistic neural network (PNN) that provides confidence intervals for its predictions in a computationally effective manner. The model is first assessed on the estimation of proper orthogonal decomposition (POD) coefficients from local sensor measurements of the solution of the shallow water equation. We find that the present model outperforms a well-known linear method with regard to estimation. The model is then applied to the estimation of the temporal evolution of POD coefficients, considering the wake of a NACA0012 airfoil with a Gurney flap and the NOAA sea surface temperature. The present model can accurately estimate the POD coefficients over time, in addition to providing confidence intervals, thereby quantifying the uncertainty in the output for a given training data set.
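One common way to obtain such confidence intervals, sketched below under the assumption of a heteroscedastic Gaussian output, is to let the network predict both a mean and a log-variance for each POD coefficient and train it with the Gaussian negative log-likelihood. The layer sizes and data are placeholders, not the paper's configuration.

```python
# Hedged sketch of a probabilistic regression network: predicts mean and
# log-variance of POD coefficients and is trained with a Gaussian NLL.
# Sizes and data are illustrative, not the paper's setup.
import torch
import torch.nn as nn

n_sensors, n_modes = 16, 4
net = nn.Sequential(
    nn.Linear(n_sensors, 128), nn.ReLU(),
    nn.Linear(128, 2 * n_modes),      # first half: means, second half: log-variances
)

def gaussian_nll(pred, target):
    mu, log_var = pred.chunk(2, dim=-1)
    return (0.5 * (log_var + (target - mu) ** 2 / log_var.exp())).mean()

x = torch.randn(512, n_sensors)       # placeholder sensor measurements
a = torch.randn(512, n_modes)         # placeholder POD coefficients

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(300):
    opt.zero_grad()
    loss = gaussian_nll(net(x), a)
    loss.backward()
    opt.step()

mu, log_var = net(x[:1]).chunk(2, dim=-1)
print("estimate:", mu, "+/- 2 std:", 2 * (0.5 * log_var).exp())  # ~95% confidence band
```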
An autoencoder is used to compress and then reconstruct three-dimensional stratified turbulence data in order to better understand fluid dynamics by studying the errors in the reconstruction. The original single data set is resolved on approximately $6.9\times10^{10}$ grid points, and 15 fluid variables in three spatial dimensions are used, for a total of about $10^{12}$ input quantities in three dimensions. The objective is to understand which of the input variables contains the most relevant information about the local turbulence regimes in stably stratified turbulence (SST). This is accomplished by observing flow features that appear in one input variable but then 'bleed over' to multiple output variables. The bleed-over is shown to be robust with respect to the number of layers in the autoencoder. In this proof of concept, the errors in the reconstruction include information about the spatial variation of vertical velocity in most of the components of the reconstructed rate-of-strain tensor and density gradient, which suggests that vertical velocity is an important marker for turbulence features of interest in SST. This result is consistent with what fluid dynamicists already understand about SST and therefore suggests an approach to understanding turbulence based on more detailed analyses of the reconstruction errors in an autoencoding algorithm.
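The general idea can be sketched, under heavy simplification, as a small 3D convolutional autoencoder applied to multi-variable sub-blocks of the flow, followed by a per-variable look at the reconstruction error. The architecture, block size, and synthetic data below are assumptions; only the 15-variable channel count is taken from the abstract.

```python
# Minimal sketch of the idea: compress multi-variable 3D flow blocks with a
# convolutional autoencoder and inspect per-variable reconstruction error.
# Architecture and block size are assumptions, not the paper's configuration.
import torch
import torch.nn as nn

n_vars = 15                                            # e.g. velocities, density, gradients, ...
encoder = nn.Sequential(
    nn.Conv3d(n_vars, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv3d(32, 8, kernel_size=3, stride=2, padding=1),      # compressed code
)
decoder = nn.Sequential(
    nn.ConvTranspose3d(8, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose3d(32, n_vars, kernel_size=4, stride=2, padding=1),
)

x = torch.randn(2, n_vars, 32, 32, 32)                 # placeholder sub-blocks of the DNS field
x_hat = decoder(encoder(x))

# Per-variable reconstruction error: errors that concentrate in, say, the
# rate-of-strain components would hint at which inputs carry the key information.
err = ((x - x_hat) ** 2).mean(dim=(0, 2, 3, 4))
print({f"var_{i}": float(e) for i, e in enumerate(err)})
```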
A two-fluid Discrete Boltzmann Model (DBM) for compressible flows based on the Ellipsoidal Statistical Bhatnagar-Gross-Krook (ES-BGK) collision operator is presented. The model has a flexible Prandtl number and specific heat ratio. Mathematically, the model is composed of two coupled Discrete Boltzmann Equations (DBEs), each describing one component of the fluid. Physically, the model is equivalent to a macroscopic fluid model based on the Navier-Stokes (NS) equations, supplemented by a coarse-grained model for thermodynamic non-equilibrium behaviors. To obtain a flexible Prandtl number, a coefficient is introduced in the ellipsoidal statistical distribution function to control the viscosity. To obtain a flexible specific heat ratio, a parameter is introduced in the energy kinetic moments to control the extra degrees of freedom. For a binary mixture, the correspondence between the macroscopic fluid model and the DBM may be several-to-one. Five typical benchmark tests are used to verify and validate the model. Some interesting non-equilibrium results, which are not available from the NS model or the single-fluid DBM, are presented.
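For context on the kinetic side, the sketch below shows only a single-component, single-relaxation-time BGK collision step on a 1D discrete velocity set. The ES-BGK target distribution and the two-fluid coupling that give the model its adjustable Prandtl number and specific heat ratio are deliberately omitted, and the grid sizes and relaxation time are illustrative.

```python
# Highly simplified sketch: one BGK collision step for a single-component
# 1D discrete-velocity distribution. The actual two-fluid ES-BGK DBM replaces
# the Maxwellian target with an ellipsoidal-statistical distribution (tunable
# Prandtl number) and couples two such equations; none of that is shown here.
import numpy as np

nv, nx = 33, 64
v = np.linspace(-6.0, 6.0, nv)                 # discrete velocity set (assumed)
dv = v[1] - v[0]
f = (1.0 + 0.1 * np.sin(2 * np.pi * np.arange(nx) / nx))[:, None] \
    * np.exp(-0.5 * (v[None, :] - 0.2) ** 2)   # placeholder distribution f(x, v)

def maxwellian(f):
    # Local equilibrium built from the discrete moments of f.
    rho = (f * dv).sum(axis=1)
    u = (f * v * dv).sum(axis=1) / rho
    T = (f * (v[None, :] - u[:, None]) ** 2 * dv).sum(axis=1) / rho
    return rho[:, None] / np.sqrt(2 * np.pi * T)[:, None] * \
           np.exp(-((v[None, :] - u[:, None]) ** 2) / (2 * T[:, None]))

tau, dt = 0.1, 0.01                            # relaxation time and time step (assumed)
f = f - dt / tau * (f - maxwellian(f))         # BGK relaxation toward local equilibrium
print("mass after collision:", (f * dv).sum(axis=1)[:3])
```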
Numerical simulation of fluids plays an essential role in modeling many physical phenomena, such as weather, climate, aerodynamics, and plasma physics. Fluids are well described by the Navier-Stokes equations, but solving these equations at scale remains daunting, limited by the computational cost of resolving the smallest spatiotemporal features. This leads to unfavorable trade-offs between accuracy and tractability. Here we use end-to-end deep learning to improve approximations inside computational fluid dynamics for modeling two-dimensional turbulent flows. For both direct numerical simulation of turbulence and large eddy simulation, our results are as accurate as baseline solvers with 8-10x finer resolution in each spatial dimension, resulting in 40-80x computational speedups. Our method remains stable during long simulations and generalizes to forcing functions and Reynolds numbers outside of the flows on which it is trained, in contrast to black-box machine learning approaches. Our approach exemplifies how scientific computing can leverage machine learning and hardware accelerators to improve simulations without sacrificing accuracy or generalization.
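The general pattern behind such hybrid solvers, sketched very loosely below for a 1D Burgers equation, is to interleave a cheap differentiable numerical step with a small convolutional correction and train the correction end-to-end through several unrolled steps against reference data. The equation, resolution, scheme, and network here are stand-ins, not the solver or architecture of the paper, and the reference trajectory is a placeholder.

```python
# Loose sketch of a hybrid "learned correction" solver for 1D Burgers flow.
# The coarse step is a simple differentiable finite-difference update and the
# CNN nudges it toward reference data; none of this mirrors the paper exactly.
import math
import torch
import torch.nn as nn

nx, dx, dt, nu = 64, 1.0 / 64, 1e-3, 0.01

def coarse_step(u):
    # Periodic central differences for advection and diffusion (illustrative scheme).
    dudx = (torch.roll(u, -1, dims=-1) - torch.roll(u, 1, dims=-1)) / (2 * dx)
    d2udx2 = (torch.roll(u, -1, dims=-1) - 2 * u + torch.roll(u, 1, dims=-1)) / dx**2
    return u + dt * (-u * dudx + nu * d2udx2)

correction = nn.Sequential(                    # small CNN that corrects the coarse update
    nn.Conv1d(1, 16, kernel_size=5, padding=2, padding_mode="circular"), nn.ReLU(),
    nn.Conv1d(16, 1, kernel_size=5, padding=2, padding_mode="circular"),
)

def hybrid_step(u):
    u = coarse_step(u)
    return u + correction(u.unsqueeze(1)).squeeze(1)

u0 = torch.sin(2 * math.pi * torch.linspace(0, 1, nx)).unsqueeze(0)   # initial condition
reference, u = [], u0
for _ in range(8):                             # placeholder targets; in practice these would
    u = coarse_step(u)                         # be downsampled snapshots from a fine solver
    reference.append(u.detach())

opt = torch.optim.Adam(correction.parameters(), lr=1e-3)
for _ in range(100):                           # train through an unrolled trajectory
    opt.zero_grad()
    u, loss = u0, 0.0
    for target in reference:
        u = hybrid_step(u)
        loss = loss + ((u - target) ** 2).mean()
    loss.backward()
    opt.step()
```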