
Bubbles in Turbulent Flows: Data-driven, kinematic models with memory terms

Added by: Zhong Yi Wan
Publication date: 2019
Fields: Physics
Language: English





We present data-driven kinematic models for the motion of bubbles in high-Re turbulent fluid flows based on recurrent neural networks with long short-term memory (LSTM) enhancements. The models extend empirical relations, such as the Maxey-Riley (MR) equation and its variants, whose applicability is limited when either the bubble size is large or the flow is very complex. The recurrent neural networks are trained on trajectories of bubbles obtained by direct numerical simulations (DNS) of the Navier-Stokes equations for a two-component incompressible flow model. The long short-term memory components exploit the time history of the flow field that the bubbles have encountered along their trajectories, and the networks are further augmented by imposing rotational invariance on their structure. We first train and validate the formulated model using DNS data for a turbulent Taylor-Green vortex. We then examine the model's predictive capabilities and its generalization to Reynolds numbers different from those of the training data on benchmark problems, including a steady (Hill's spherical vortex) and an unsteady (Gaussian vortex ring) flow field. We find that the predictions of the developed model are significantly improved compared with those obtained from the MR equation. Our results indicate that data-driven models with history terms are well suited to capturing the trajectories of bubbles in turbulent flows.
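
As a rough illustration of the kind of architecture described above, the following Python (PyTorch) sketch maps a window of local flow features sampled along a bubble trajectory to the bubble velocity through an LSTM. The class name BubbleLSTM, the feature count, the layer sizes, and the training loop are illustrative assumptions, not the authors' implementation, and the rotational-invariance augmentation mentioned in the abstract is omitted.

# Minimal sketch (assumptions throughout): an LSTM-based kinematic model that maps
# the time history of flow quantities sampled along a bubble trajectory to the
# bubble velocity.
import torch
import torch.nn as nn

class BubbleLSTM(nn.Module):
    def __init__(self, n_features=6, hidden_size=64, n_layers=1, n_outputs=3):
        super().__init__()
        # Encodes the history of flow features (e.g. fluid velocity and its
        # gradients interpolated at the bubble position) seen along the trajectory.
        self.lstm = nn.LSTM(n_features, hidden_size, n_layers, batch_first=True)
        # Maps the final hidden state to the predicted bubble velocity (3 components).
        self.head = nn.Linear(hidden_size, n_outputs)

    def forward(self, flow_history):
        # flow_history: (batch, time_steps, n_features)
        _, (h_n, _) = self.lstm(flow_history)
        return self.head(h_n[-1])  # (batch, n_outputs)

# Training sketch: regress DNS bubble velocities from the sampled flow histories.
model = BubbleLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(flow_history, bubble_velocity):
    optimizer.zero_grad()
    loss = loss_fn(model(flow_history), bubble_velocity)
    loss.backward()
    optimizer.step()
    return loss.item()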



Related research

The nonlinear and nonlocal coupling of vorticity and strain rate constitutes a major hindrance in understanding the self-amplification of velocity gradients in turbulent fluid flows. Utilizing highly resolved direct numerical simulations of isotropic turbulence in periodic domains of up to $12288^3$ grid points, and Taylor-scale Reynolds numbers $R_\lambda$ in the range $140-1300$, we investigate this nonlocality by decomposing the strain-rate tensor into local and nonlocal contributions obtained through Biot-Savart integration of vorticity in a sphere of radius $R$. We find that vorticity is predominantly amplified by the nonlocal strain coming from beyond a characteristic scale size, which varies as a simple power law of the vorticity magnitude. The underlying dynamics preferentially align vorticity with the most extensive eigenvector of the nonlocal strain. The remaining local strain aligns vorticity with the intermediate eigenvector and does not contribute significantly to amplification; instead it surprisingly attenuates intense vorticity, leading to a breakdown of the observed power law and ultimately also of the scale invariance of vorticity amplification, with important implications for prevailing intermittency theories.
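
For reference, the decomposition described above can be written with the standard Biot-Savart relation; the LaTeX sketch below uses generic notation and omits the regularization details of the study.

u_i(\mathbf{x}) \;=\; \frac{1}{4\pi}\int \epsilon_{ijk}\,\omega_j(\mathbf{x}')\,
    \frac{x_k - x_k'}{|\mathbf{x}-\mathbf{x}'|^{3}}\,\mathrm{d}^3x' ,
\qquad
S_{ij} \;=\; \tfrac{1}{2}\bigl(\partial_j u_i + \partial_i u_j\bigr),
\qquad
S_{ij}(\mathbf{x}) \;=\; S_{ij}^{R}(\mathbf{x}) + S_{ij}^{R,\mathrm{NL}}(\mathbf{x}),

where $S_{ij}^{R}$ and $S_{ij}^{R,\mathrm{NL}}$ collect the contributions from vorticity inside and outside the sphere $|\mathbf{x}'-\mathbf{x}|\le R$, respectively, so that the vortex-stretching term $\omega_i S_{ij}\omega_j$ can be split into local and nonlocal parts as a function of the cutoff radius $R$.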
We present two models for turbulent flows with periodic boundary conditions and with either rotation or a magnetic field in the magnetohydrodynamics (MHD) limit. One model, based on Lagrangian averaging, can be viewed as an invariant-preserving filter, whereas the other model, based on spectral closures, generalizes the concepts of eddy viscosity and eddy noise. These models, used separately or in conjunction, may lead to substantial savings when modeling high-Reynolds-number flows, as checked against high-resolution direct numerical simulations (DNS), the examples given here being run on grids of up to $1536^3$ points.
We investigate the capability of neural-network-based model order reduction, i.e., the autoencoder (AE), for fluid flows. As an example model, an AE comprising a convolutional neural network and multi-layer perceptrons is considered in this study. The AE model is assessed with four canonical fluid flows, namely: (1) a two-dimensional cylinder wake, (2) its transient process, (3) the NOAA sea surface temperature, and (4) a $y-z$ sectional field of turbulent channel flow, in terms of the number of latent modes, the choice of nonlinear activation function, and the number of weights contained in the AE model. We find that the AE models are sensitive to the choice of the aforementioned parameters depending on the target flow. Finally, we foresee extended applications and perspectives of machine-learning-based order reduction for numerical and experimental studies in the fluid dynamics community.
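
As a schematic example only (the grid size, channel counts, and latent dimension here are assumptions, not the study's architecture), a convolutional autoencoder with an MLP bottleneck for single-channel 2D flow snapshots could look like the following Python (PyTorch) sketch.

# Minimal sketch: convolutional encoder -> MLP bottleneck (latent modes) -> decoder.
import torch
import torch.nn as nn

class FlowAE(nn.Module):
    def __init__(self, latent_dim=8):
        super().__init__()
        # Convolutional encoder: (batch, 1, 64, 64) -> flattened feature map
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
        )
        # MLP bottleneck producing the latent modes and mapping back
        self.to_latent = nn.Linear(32 * 16 * 16, latent_dim)
        self.from_latent = nn.Linear(latent_dim, 32 * 16 * 16)
        # Convolutional decoder reconstructing the snapshot
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),              # 32 -> 64
        )

    def forward(self, x):
        z = self.to_latent(self.enc(x))              # low-dimensional latent modes
        h = self.from_latent(z).view(-1, 32, 16, 16)
        return self.dec(h), z

# Training minimizes the reconstruction error, e.g. nn.MSELoss()(x_hat, x).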
Modeling realistic fluid and plasma flows is computationally intensive, motivating the use of reduced-order models for a variety of scientific and engineering tasks. However, it is challenging to characterize, much less guarantee, the global stability (i.e., long-time boundedness) of these models. The seminal work of Schlegel and Noack (JFM, 2015) provided a theorem outlining necessary and sufficient conditions to ensure global stability in systems with energy-preserving, quadratic nonlinearities, with the goal of evaluating the stability of projection-based models. In this work, we incorporate this theorem into modern data-driven models obtained via machine learning. First, we propose that this theorem should be a standard diagnostic for the stability of projection-based and data-driven models, examining the conditions under which it holds. Second, we illustrate how to modify the objective function in machine learning algorithms to promote globally stable models, with implications for the modeling of fluid and plasma flows. Specifically, we introduce a modified trapping SINDy algorithm based on the sparse identification of nonlinear dynamics (SINDy) method. This method enables the identification of models that, by construction, only produce bounded trajectories. The effectiveness and accuracy of this approach are demonstrated on a broad set of examples of varying model complexity and physical origin, including the vortex shedding in the wake of a circular cylinder.
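
To make the SINDy part concrete, the sketch below implements plain sequentially thresholded least squares over a constant/linear/quadratic library in Python (NumPy); it is a generic, assumed example rather than the paper's code, and the trapping modification discussed above would additionally constrain or penalize the quadratic coefficients so that the identified model is energy preserving and its trajectories remain bounded.

# Minimal SINDy sketch: sparse regression of Xdot ~ Theta(X) @ Xi.
import numpy as np
from itertools import combinations_with_replacement

def quadratic_library(X):
    # Columns: 1, x_i, x_i * x_j for a trajectory X of shape (n_samples, n_states).
    n, d = X.shape
    cols = [np.ones((n, 1)), X]
    cols += [X[:, [i]] * X[:, [j]] for i, j in combinations_with_replacement(range(d), 2)]
    return np.hstack(cols)

def sindy_stlsq(X, Xdot, threshold=0.1, n_iter=10):
    # Sequentially thresholded least squares: fit, zero small coefficients, refit.
    Theta = quadratic_library(X)
    Xi = np.linalg.lstsq(Theta, Xdot, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for k in range(Xdot.shape[1]):
            active = ~small[:, k]
            if active.any():
                Xi[active, k] = np.linalg.lstsq(Theta[:, active], Xdot[:, k], rcond=None)[0]
    return Xi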
We present a new turbulent data reconstruction method based on supervised machine learning techniques inspired by super-resolution and inbetweening, which can recover high-resolution turbulent flows from grossly coarse flow data in space and time. For the present machine-learning-based data reconstruction, we use a downsampled skip-connection/multi-scale model based on a convolutional neural network to incorporate the multi-scale nature of fluid flows into the network structure. As an initial example, the model is applied to a two-dimensional cylinder wake at $Re_D = 100$. The flow fields reconstructed by the proposed method show excellent agreement with the reference data obtained by direct numerical simulation. Next, we examine the capability of the proposed model for two-dimensional decaying homogeneous isotropic turbulence. The machine-learned models can follow the decaying evolution from coarse input data in space and time, according to an assessment based on turbulence statistics. The proposed concept is further investigated for a complex turbulent channel flow over a three-dimensional domain at $Re_{\tau} = 180$. The present model can reconstruct highly resolved turbulent flows from very coarse input data in space, and it can also reproduce the temporal evolution when the time interval is chosen appropriately. The dependence on the number of training snapshots and on the duration between the first and last frames, assessed with a temporal two-point correlation coefficient, is also examined to reveal the capability and robustness of spatio-temporal super-resolution reconstruction. These results suggest that the present method can address a range of flow reconstruction tasks in support of computational and experimental efforts.
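
A drastically simplified and assumed sketch of the super-resolution idea (not the downsampled skip-connection/multi-scale model itself) is given below in Python (PyTorch): a coarse snapshot is bilinearly upsampled and a small two-branch CNN learns a correction on top of it, the two branches using different kernel sizes as a crude stand-in for multi-scale processing.

# Minimal sketch: coarse 2D snapshot -> upsample -> CNN correction -> fine snapshot.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleSuperRes(nn.Module):
    def __init__(self, scale=4):
        super().__init__()
        self.scale = scale
        self.branch_small = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 16, 3, padding=1)
        )
        self.branch_large = nn.Sequential(
            nn.Conv2d(1, 16, 7, padding=3), nn.ReLU(), nn.Conv2d(16, 16, 7, padding=3)
        )
        self.fuse = nn.Conv2d(32, 1, 3, padding=1)

    def forward(self, coarse):
        # coarse: (batch, 1, H, W) low-resolution snapshot
        up = F.interpolate(coarse, scale_factor=self.scale, mode="bilinear",
                           align_corners=False)
        feats = torch.cat([self.branch_small(up), self.branch_large(up)], dim=1)
        # Skip connection: the network learns a correction to the upsampled field.
        return up + self.fuse(feats)

# Training would regress DNS snapshots with an MSE loss; temporal inbetweening would
# similarly condition on the first and last frames and regress the intermediate ones.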
