
A Deep Learning based Approach to Reduced Order Modeling for Turbulent Flow Control using LSTM Neural Networks

Added by Arvind Mohan
Publication date: 2018
Field: Physics
Language: English





Reduced Order Modeling (ROM) for engineering applications has been a major research focus in the past few decades due to the unprecedented physical insight into turbulence offered by high-fidelity CFD. The primary goal of a ROM is to model the key physics/features of a flow-field without computing the full Navier-Stokes (NS) equations. This is accomplished by projecting the high-dimensional dynamics onto a low-dimensional subspace, typically utilizing dimensionality reduction techniques like Proper Orthogonal Decomposition (POD) coupled with Galerkin projection. In this work, we demonstrate a deep-learning-based approach to build a ROM using the POD basis of canonical DNS datasets, for turbulent flow control applications. We find that a type of recurrent neural network, the Long Short-Term Memory (LSTM), which has been primarily utilized for problems like speech modeling and language translation, shows attractive potential in modeling the temporal dynamics of turbulence. Additionally, we introduce the Hurst exponent as a tool to study LSTM behavior for non-stationary data, and uncover useful characteristics that may aid ROM development for a variety of applications.
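The POD step described above can be sketched with a snapshot SVD; the LSTM is then trained on the resulting temporal coefficients a_k(t) rather than on the full field. A minimal sketch, using synthetic low-rank data as a stand-in for DNS snapshots (all sizes and names here are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
n_space, n_time, r = 500, 200, 10

# Synthetic low-rank snapshot matrix: each column is a flattened flow field
# at one time instant (a stand-in for DNS data, not real DNS).
modes_true = rng.standard_normal((n_space, r))
coeffs_true = rng.standard_normal((r, n_time))
X = modes_true @ coeffs_true

# POD via the SVD of the snapshot matrix: columns of U are spatial modes;
# the rows of S * Vt are the temporal coefficients a_k(t) that a sequence
# model such as an LSTM would be trained to forecast.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
phi = U[:, :r]                     # first r POD modes
a = S[:r, None] * Vt[:r, :]        # temporal coefficients, shape (r, n_time)

# Reconstruction from the reduced state: the ROM never touches the full NS
# equations, only this low-dimensional representation.
X_rec = phi @ a
err = np.linalg.norm(X - X_rec) / np.linalg.norm(X)
```

An LSTM surrogate would then learn the map from past windows of `a` to its next values, and predicted fields follow from `phi @ a_predicted`.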




Suraj Pawar, Romit Maulik (2020)
Several applications in the scientific simulation of physical systems can be formulated as control/optimization problems. The computational models for such systems generally contain hyperparameters, which control solution fidelity and computational expense. The tuning of these parameters is non-trivial, and the general approach is to manually "spot-check" for good combinations. This is because optimal hyperparameter configuration search becomes impractical when the parameter space is large and when the parameters may vary dynamically. To address this issue, we present a framework based on deep reinforcement learning (RL) to train a deep neural network agent that controls a model solve by varying parameters dynamically. First, we validate our RL framework on the problem of controlling chaos in chaotic systems by dynamically changing the parameters of the system. Subsequently, we illustrate the capabilities of our framework for accelerating the convergence of a steady-state CFD solver by automatically adjusting the relaxation factors of the discretized Navier-Stokes equations during run-time. The results indicate that run-time control of the relaxation factors by the learned policy leads to a significant reduction in the number of iterations for convergence compared to random selection of the relaxation factors. Our results point to potential benefits of learning adaptive hyperparameter strategies across different geometries and boundary conditions, with implications for reduced computational campaign expenses. (Data and codes available at https://github.com/Romit-Maulik/PAR-RL)
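The effect of the relaxation factor on convergence can be illustrated with a toy fixed-point solver (a hypothetical stand-in, not the PAR-RL code; the contraction `g` and the relaxation values are assumptions):

```python
def solve(omega, tol=1e-10, max_iter=1000):
    """Relaxed fixed-point iteration x <- x + omega * (g(x) - x)."""
    g = lambda x: 0.5 * x + 1.0    # toy contraction with fixed point 2.0
    x, it = 0.0, 0
    while abs(g(x) - x) > tol and it < max_iter:
        x = x + omega * (g(x) - x)  # relaxation-factor-controlled update
        it += 1
    return x, it

# A well-chosen relaxation factor converges in far fewer iterations than a
# poor one; an RL policy would learn to pick (and vary) omega at run-time
# from the residual history.
x_fast, it_fast = solve(omega=2.0)
x_slow, it_slow = solve(omega=0.5)
```

Here omega = 2.0 happens to be optimal for this linear map and converges in a single iteration; in the RL setting the iteration count (or residual reduction) would enter the agent's reward.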
In this work, we develop Non-Intrusive Reduced Order Models (NIROMs) that combine Proper Orthogonal Decomposition (POD) with a Radial Basis Function (RBF) interpolation method to construct efficient reduced order models for time-dependent problems arising in large scale environmental flow applications. The performance of the POD-RBF NIROM is compared with a traditional nonlinear POD (NPOD) model by evaluating the accuracy and robustness for test problems representative of riverine flows. Different greedy algorithms are studied in order to determine a near-optimal distribution of interpolation points for the RBF approximation. A new power-scaled residual greedy (psr-greedy) algorithm is proposed to address some of the primary drawbacks of the existing greedy approaches. The relative performances of these greedy algorithms are studied with numerical experiments using realistic two-dimensional (2D) shallow water flow applications involving coastal and riverine dynamics.
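The POD-RBF idea can be sketched in a few lines: after projection onto POD modes, each reduced coefficient becomes a scalar function of the parameter (or time), which an RBF interpolant approximates without touching the full-order operators. A minimal one-dimensional sketch with a Gaussian kernel (the sampled coefficient, kernel width, and point counts are illustrative assumptions, and no greedy point selection is performed):

```python
import numpy as np

# Hypothetical 1-D parameter samples and one POD coefficient sampled at them;
# a sine wave stands in for a reduced temporal coefficient.
t_train = np.linspace(0.0, 1.0, 20)
a_train = np.sin(2.0 * np.pi * t_train)

def rbf_matrix(x, centers, eps=10.0):
    """Gaussian radial basis matrix: phi(r) = exp(-(eps * r)**2)."""
    r = np.abs(x[:, None] - centers[None, :])
    return np.exp(-(eps * r) ** 2)

# Interpolation weights from a dense linear solve (a greedy algorithm, as in
# the paper, would instead select a near-optimal subset of centers).
w = np.linalg.solve(rbf_matrix(t_train, t_train), a_train)

# Non-intrusive evaluation at an unseen parameter value.
t_new = np.array([0.37])
a_new = rbf_matrix(t_new, t_train) @ w
```

Each POD coefficient gets its own interpolant, so online evaluation is a handful of kernel evaluations per coefficient rather than a nonlinear solve.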
A new kinetic model for multiphase flow was presented under the framework of the discrete Boltzmann method (DBM). Significantly different from the previous DBM, a bottom-up approach was adopted in this model. The effects of molecular size and repulsion potential were described by the Enskog collision model; the attraction potential was obtained through the mean-field approximation method. The molecular interactions, which result in the non-ideal equation of state and surface tension, were directly introduced as an external force term. Several typical benchmark problems, including Couette flow, the two-phase coexistence curve, the Laplace law, phase separation, and the collision of two droplets, were simulated to verify the model. In particular, for two types of droplet collisions, the strengths of two non-equilibrium effects, $\bar{D}_2^*$ and $\bar{D}_3^*$, defined through the second- and third-order non-conserved kinetic moments of $(f - f^{eq})$, are comparatively investigated, where $f$ ($f^{eq}$) is the (equilibrium) distribution function. It is interesting to find that during the collision process $\bar{D}_2^*$ is always significantly larger than $\bar{D}_3^*$; $\bar{D}_2^*$ can be used to identify the different stages of the collision process and to distinguish different types of collisions. The modeling method can be directly extended to a higher-order model for the case where the non-equilibrium effect is strong and the linear constitutive law of viscous stress is no longer valid.
Advection-dominated dynamical systems, characterized by partial differential equations, are found in applications ranging from weather forecasting to engineering design where accuracy and robustness are crucial. There has been significant interest in the use of techniques borrowed from machine learning to reduce the computational expense and/or improve the accuracy of predictions for these systems. These rely on the identification of a basis that reduces the dimensionality of the problem and the subsequent use of time-series and sequential learning methods to forecast the evolution of the reduced state. Often, however, machine-learned predictions after reduced-basis projection are plagued by issues of stability stemming from incomplete capture of multiscale processes as well as from error growth over long forecast durations. To address these issues, we have developed a non-autoregressive time-series approach for predicting linear reduced-basis time histories of forward models. In particular, we demonstrate that non-autoregressive counterparts of sequential learning methods such as the long short-term memory (LSTM) considerably improve the stability of machine-learned reduced-order models. We evaluate our approach on the inviscid shallow water equations and show that a non-autoregressive variant of the standard LSTM approach that is bidirectional in the PCA components obtains the best accuracy for recreating the nonlinear dynamics of partial observations. Moreover, and critically for many applications of these surrogates, inference times are reduced by three orders of magnitude using our approach, compared with both the equation-based Galerkin projection method and the standard LSTM approach.
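The stability argument above can be demonstrated with a toy reduced state: an autoregressive surrogate feeds its own output back, so a small one-step model error compounds over the horizon, while a non-autoregressive surrogate maps the time index to the state directly and keeps the error bounded. A minimal sketch (the rotation dynamics and the 1% error are illustrative assumptions, not the paper's model):

```python
import numpy as np

# Ground-truth reduced dynamics: a pure rotation of a 2-D state, a stand-in
# for an oscillatory pair of reduced-basis coefficients.
theta, n = 0.1, 300
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
x0 = np.array([1.0, 0.0])
truth = [x0]
for _ in range(n):
    truth.append(R @ truth[-1])
truth = np.array(truth)

# Autoregressive surrogate: a slightly wrong one-step map fed back on itself;
# the 1% per-step error compounds multiplicatively over the horizon.
M = 1.01 * R
x = x0.copy()
ar_pred = [x0]
for _ in range(n):
    x = M @ x
    ar_pred.append(x)
ar_err = np.linalg.norm(ar_pred[-1] - truth[-1])

# Non-autoregressive surrogate: predicts each output from the time index
# directly, so the same 1% error is applied once per output, not compounded.
na_pred = 1.01 * truth
na_err = np.linalg.norm(na_pred[-1] - truth[-1])
```

After 300 steps the autoregressive error has grown by orders of magnitude while the direct prediction stays at the 1% level, which is the stability gap the non-autoregressive formulation closes.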
A novel hybrid deep neural network architecture is designed to capture the spatial-temporal features of unsteady flows around moving boundaries directly from high-dimensional unsteady flow-field data. The hybrid deep neural network is constituted by a convolutional neural network (CNN), an improved convolutional long short-term memory neural network (ConvLSTM), and a deconvolutional neural network (DeCNN). Flow fields at a future time step can be predicted by the hybrid deep neural network from the flow fields and boundary positions at previous time steps. Unsteady wake flows around a forced-oscillation cylinder with various amplitudes are calculated to establish the datasets used as training samples for the hybrid deep neural networks. The trained networks are then tested by predicting the unsteady flow fields around a forced-oscillation cylinder with a new amplitude. The effect of the neural network structure parameters on prediction accuracy is analyzed, and the hybrid deep neural network constituted by the best parameter combination is used to predict the flow fields at future times. The predicted flow fields are in good agreement with those calculated directly by a computational fluid dynamics solver, which means that this kind of deep neural network can capture accurate spatial-temporal information from the spatial-temporal series of unsteady flows around moving boundaries. The result shows the potential capability of this novel hybrid deep neural network for flow control of a vibrating cylinder, where fast calculation of high-dimensional nonlinear unsteady flow around moving boundaries is needed.
