
Nonlinear mode decomposition with convolutional neural networks for fluid dynamics

Added by Koji Fukagata
Publication date: 2019
Fields: Physics
Language: English





We present a new nonlinear mode decomposition method for visualizing decomposed flow fields, named the mode decomposing convolutional neural network autoencoder (MD-CNN-AE). The proposed method is applied to flow around a circular cylinder at $Re_D=100$ as a test case. The flow attributes are mapped into two modes in the latent space, and these two modes are then visualized in the physical space. Because MD-CNN-AEs with nonlinear activation functions show lower reconstruction errors than proper orthogonal decomposition (POD), the nonlinearity contained in the activation function is considered the key to improving the capability of the model. By applying POD to each field decomposed with the MD-CNN-AE using hyperbolic tangent activation, it is found that a single nonlinear MD-CNN-AE mode contains multiple orthogonal bases, in contrast to the linear methods, i.e., POD and the MD-CNN-AE with linear activation. We further assess the proposed MD-CNN-AE by applying it to the transient process of a circular cylinder wake in order to examine its capability for flows containing higher-order spatial modes. The present results suggest a great potential for the nonlinear MD-CNN-AE to be used for feature extraction of flow fields in a lower dimension than POD, while retaining interpretable relationships with the conventional POD modes.
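
As a rough illustration of the decomposition idea only (framework, layer sizes, and field resolution below are assumptions, not the authors' exact architecture), the sketch shows a two-mode convolutional autoencoder in PyTorch: a shared encoder compresses a snapshot to two latent scalars, each scalar drives its own decoder, and the reconstruction is the sum of the two decoded fields, which serve as the visualizable modes.

```python
# Minimal sketch of a two-mode CNN autoencoder in the spirit of MD-CNN-AE.
# Layer sizes, resolution, and channel counts are placeholders, not the paper's.
import torch
import torch.nn as nn

class TwoModeCNNAE(nn.Module):
    def __init__(self, field_size=64):
        super().__init__()
        # Shared encoder: one snapshot -> two latent scalars (one per mode).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.Tanh(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.Tanh(),
            nn.Flatten(),
            nn.Linear(16 * (field_size // 4) ** 2, 2),
        )
        # One independent decoder per latent variable.
        self.decoders = nn.ModuleList(
            [self._make_decoder(field_size) for _ in range(2)]
        )

    def _make_decoder(self, field_size):
        s = field_size // 4
        return nn.Sequential(
            nn.Linear(1, 16 * s * s), nn.Tanh(),
            nn.Unflatten(1, (16, s, s)),
            nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1), nn.Tanh(),
            nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)                                   # (batch, 2)
        modes = [dec(z[:, i:i + 1]) for i, dec in enumerate(self.decoders)]
        return sum(modes), modes                              # reconstruction, decomposed fields

model = TwoModeCNNAE()
recon, modes = model(torch.randn(4, 1, 64, 64))               # modes[0], modes[1] are the two fields
```

Training would minimize the error between `recon` and the input snapshot; replacing `nn.Tanh()` with an identity gives a linear variant of the kind the abstract compares against POD.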



Related research

We propose a customized convolutional neural network based autoencoder called a hierarchical autoencoder, which allows us to extract nonlinear autoencoder modes of flow fields while preserving the contribution order of the latent vectors. As preliminary tests, the proposed method is first applied to a cylinder wake at $Re_D = 100$ and its transient process. It is found that the proposed method can extract the features of these laminar flow fields as latent vectors while keeping the order of their energy content. The present hierarchical autoencoder is further assessed with a two-dimensional $y$-$z$ cross-sectional velocity field of turbulent channel flow at $Re_\tau = 180$ in order to examine its applicability to turbulent flows. It is demonstrated that the turbulent flow field can be efficiently mapped into the latent space by utilizing the hierarchical model with the concept of an ordered autoencoder mode family. The present results suggest that the proposed concept can be extended to meet various demands in fluid dynamics, including reduced-order modeling and its combination with linear theory-based methods, by using its ability to arrange the order of the extracted nonlinear modes.
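
One way to picture the ordered training (a simplified sketch with hypothetical `make_encoder`/`make_decoder` factories, not the paper's exact procedure): each new latent variable is trained while the encoders of the earlier ones are frozen, so earlier latent variables retain the largest share of the reconstruction.

```python
# Conceptual sketch of ordered (hierarchical) latent training.
# make_encoder / make_decoder are hypothetical factories for small CNNs;
# each encoder outputs one latent scalar per snapshot, shape (batch, 1).
import torch

def train_hierarchical(loader, n_modes, make_encoder, make_decoder, epochs=100):
    frozen_encoders = []                               # encoders of already-trained modes
    for k in range(n_modes):
        enc = make_encoder()
        dec = make_decoder(n_latent=k + 1)             # decodes all k+1 latent variables
        opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()))
        for _ in range(epochs):
            for x in loader:                           # x: (batch, 1, H, W) velocity snapshots
                with torch.no_grad():                  # earlier modes stay fixed
                    z_prev = [e(x) for e in frozen_encoders]
                z = torch.cat(z_prev + [enc(x)], dim=1)        # (batch, k+1)
                loss = torch.mean((dec(z) - x) ** 2)
                opt.zero_grad()
                loss.backward()
                opt.step()
        frozen_encoders.append(enc.eval())             # freeze the k-th encoder
    return frozen_encoders
```

Because each new latent variable can only reduce the residual left by the frozen ones, the extracted modes come out ordered by their contribution to the reconstruction.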
OpenSBLI is an open-source code-generation system for compressible fluid dynamics (CFD) on heterogeneous computing architectures. Written in Python, OpenSBLI is an explicit high-order finite-difference solver on structured curvilinear meshes. Shock capturing is performed by a choice of high-order Weighted Essentially Non-Oscillatory (WENO) or Targeted Essentially Non-Oscillatory (TENO) schemes. OpenSBLI generates a complete CFD solver in the Oxford Parallel Structured (OPS) domain-specific language. The OPS library is embedded in C code, enabling massively parallel execution on a variety of high-performance-computing architectures, including GPUs. The present paper presents a code base that has been completely rewritten from the earlier proof of concept (Jacobs et al., JoCS 18 (2017), 12-23), allowing shock capturing, coordinate transformations for complex geometries, and a wide range of boundary conditions, including solid walls with and without heat transfer. A suite of validation and verification cases is presented, along with a demonstration of a large-scale Direct Numerical Simulation (DNS) of a transitional Shockwave Boundary Layer Interaction (SBLI). The code is shown to have good weak and strong scaling on multi-GPU clusters. We demonstrate that code generation and domain-specific languages are suitable for performing efficient large-scale simulations of complex fluid flows on emerging computing architectures.
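
For orientation only, the snippet below shows a generic fourth-order central finite difference on a periodic one-dimensional grid; it is plain NumPy for illustration and is not OpenSBLI-generated code, which is emitted as OPS/C.

```python
# Generic 4th-order central finite difference on a periodic 1-D grid.
# Plain NumPy for illustration; OpenSBLI itself emits OPS/C code, not this.
import numpy as np

def ddx_central4(f, dx):
    """df/dx via the 4th-order central stencil with periodic boundaries."""
    return (-np.roll(f, -2) + 8 * np.roll(f, -1)
            - 8 * np.roll(f, 1) + np.roll(f, 2)) / (12 * dx)

x = np.linspace(0.0, 2 * np.pi, 256, endpoint=False)
dx = x[1] - x[0]
error = np.max(np.abs(ddx_central4(np.sin(x), dx) - np.cos(x)))   # ~1e-8 on this grid
```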
Reduced-Order Modeling (ROM) for engineering applications has been a major research focus in the past few decades due to the unprecedented physical insight into turbulence offered by high-fidelity CFD. The primary goal of a ROM is to model the key physics/features of a flow field without solving the full Navier-Stokes (NS) equations. This is accomplished by projecting the high-dimensional dynamics onto a low-dimensional subspace, typically utilizing dimensionality reduction techniques like Proper Orthogonal Decomposition (POD) coupled with Galerkin projection. In this work, we demonstrate a deep learning based approach to build a ROM using the POD basis of canonical DNS datasets for turbulent flow control applications. We find that a type of Recurrent Neural Network, the Long Short-Term Memory (LSTM), which has primarily been utilized for problems like speech modeling and language translation, shows attractive potential for modeling the temporal dynamics of turbulence. Additionally, we introduce the Hurst exponent as a tool to study LSTM behavior for non-stationary data, and uncover useful characteristics that may aid ROM development for a variety of applications.
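
A minimal sketch of the two ingredients, with stand-in data and placeholder sizes (not the paper's datasets or hyperparameters): POD modes are obtained from an SVD of the mean-subtracted snapshot matrix, and an LSTM is set up to predict the next temporal POD coefficients from a window of previous ones.

```python
# Sketch: POD basis from a snapshot matrix (via SVD) plus an LSTM that
# advances the POD coefficients in time. Sizes and data are stand-ins.
import numpy as np
import torch
import torch.nn as nn

snapshots = np.random.randn(4096, 500)          # (n_points, n_time), mean-subtracted fields
U, S, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 10                                          # number of retained POD modes
phi = U[:, :r]                                  # spatial POD basis, (n_points, r)
a = (phi.T @ snapshots).T                       # temporal coefficients, (n_time, r)

class CoeffLSTM(nn.Module):
    def __init__(self, r, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(r, hidden, batch_first=True)
        self.out = nn.Linear(hidden, r)

    def forward(self, seq):                     # seq: (batch, window, r)
        h, _ = self.lstm(seq)
        return self.out(h[:, -1])               # next-step POD coefficients

model = CoeffLSTM(r)
window = torch.tensor(a[:20], dtype=torch.float32).unsqueeze(0)
a_next = model(window)                          # prediction of a(t+1); untrained here
```

Once trained on the coefficient time series, a full-field prediction is recovered by projecting back through the basis, e.g. `phi @ a_next.detach().numpy().T`.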
In many applications, it is important to reconstruct a fluid flow field, or some other high-dimensional state, from limited measurements and limited data. In this work, we propose a shallow neural network-based learning methodology for such fluid flow reconstruction. Our approach learns an end-to-end mapping between the sensor measurements and the high-dimensional fluid flow field, without any heavy preprocessing of the raw data. No prior knowledge is assumed to be available, and the estimation method is purely data-driven. We demonstrate the performance on three examples in fluid mechanics and oceanography, showing that this modern data-driven approach outperforms traditional modal approximation techniques commonly used for flow reconstruction. Not only does the proposed method show superior performance, it can also match the performance of traditional methods while using significantly fewer sensors. Thus, the mathematical architecture is ideal for emerging global monitoring technologies where measurement data are often limited.
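
The essential structure can be sketched in a few lines (sensor count, layer width, and data below are hypothetical, not the paper's configuration): a single hidden layer maps point measurements directly to the full discretized field.

```python
# Sketch: shallow decoder from a handful of point sensors to the full field.
# Sensor count, layer width, and data are hypothetical, not the paper's setup.
import torch
import torch.nn as nn

n_sensors, n_points = 10, 4096          # point measurements -> discretized flow field
net = nn.Sequential(
    nn.Linear(n_sensors, 64), nn.ReLU(),
    nn.Linear(64, n_points),            # end-to-end mapping, no modal truncation
)

sensors = torch.randn(32, n_sensors)    # a batch of stand-in sensor readings
field = net(sensors)                    # reconstructed fields, shape (32, n_points)
# Training would minimize nn.MSELoss()(field, true_field) over snapshot pairs.
```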
Particle-in-Cell (PIC) methods are widely used computational tools for fluid and kinetic plasma modeling. While both the fluid and kinetic PIC approaches have been successfully used to target either kinetic or fluid simulations, little has been done to combine fluid and kinetic particles under the same PIC framework. This work addresses this issue by proposing a new PIC method, PolyPIC, that uses polymorphic computational particles. In this numerical scheme, particles can be either kinetic or fluid, and fluid particles can become kinetic when necessary, e.g., for particles undergoing strong acceleration. We design and implement the PolyPIC method, and test it against the Landau damping of Langmuir and ion acoustic waves, the two-stream instability, and sheath formation. We unify the fluid and kinetic PIC methods under one common framework comprising both fluid and kinetic particles, providing a tool for adaptive fluid-kinetic coupling in plasma simulations.
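
As a toy illustration of the adaptive idea only (the real PolyPIC scheme is far more involved, and the fluid-particle update is elided here), the sketch below promotes a fluid particle to a kinetic one when the local acceleration exceeds a threshold, and then advances kinetic particles with a simple explicit push.

```python
# Toy illustration of fluid -> kinetic particle promotion only; the actual
# PolyPIC scheme (and the fluid-particle update) is far more involved.
import numpy as np

class Particle:
    def __init__(self, x, v, kind="fluid"):
        self.x, self.v, self.kind = x, v, kind

def push(particles, E_at, dt, qm=1.0, accel_threshold=5.0):
    """Advance particles; promote fluid particles that feel strong acceleration."""
    for p in particles:
        a = qm * E_at(p.x)                     # acceleration from the local field
        if p.kind == "fluid" and abs(a) > accel_threshold:
            p.kind = "kinetic"                 # promote: resolve full velocity dynamics
        if p.kind == "kinetic":
            p.v += a * dt                      # explicit kinetic push
        # fluid particles would instead be advanced with the local fluid moments
        p.x += p.v * dt

particles = [Particle(x, 0.0) for x in np.linspace(0.0, 1.0, 8)]
push(particles, E_at=lambda x: 10.0 * np.sin(2 * np.pi * x), dt=0.01)
```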