There are two main strategies for improving the accuracy of projection-based reduced order models (ROMs): (i) improving the ROM itself, i.e., adding new terms to the standard ROM; and (ii) improving the ROM basis, i.e., constructing ROM bases that yield more accurate ROMs. In this paper, we use the latter. We propose new Lagrangian inner products, which we use together with Eulerian and Lagrangian data to construct new Lagrangian ROMs. We show that the new Lagrangian ROMs are orders of magnitude more accurate than the standard Eulerian ROMs, i.e., ROMs that use the standard Eulerian inner product and data to construct the ROM basis. Specifically, for the quasi-geostrophic equations, we show that the new Lagrangian ROMs are more accurate than the standard Eulerian ROMs in approximating not only Lagrangian fields (e.g., the finite-time Lyapunov exponent (FTLE)), but also Eulerian fields (e.g., the streamfunction). We emphasize that the new Lagrangian ROMs do not employ any closure modeling to account for the effect of the discarded modes (which is standard procedure for low-dimensional ROMs of complex nonlinear systems). Thus, the dramatic increase in accuracy of the new Lagrangian ROMs is entirely due to the novel Lagrangian inner products used to build the Lagrangian ROM basis.
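The basis construction behind such a ROM can be sketched as a method-of-snapshots POD in which the Euclidean inner product is replaced by a matrix-weighted one. The diagonal weight below is a generic placeholder, not the paper's actual Lagrangian inner product, and the snapshot data are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 200, 30, 5                   # DOFs, snapshots, retained modes

S = rng.standard_normal((n, m))        # snapshot matrix (placeholder data)
w = rng.uniform(0.5, 1.5, n)           # stand-in weight defining the inner product
W = np.diag(w)

# Method of snapshots: eigendecompose the m x m Gramian in the weighted
# inner product (u, v)_W = u^T W v instead of the Euclidean one
G = S.T @ W @ S
evals, evecs = np.linalg.eigh(G)
idx = np.argsort(evals)[::-1][:r]      # keep the largest eigenvalues
evals, evecs = evals[idx], evecs[:, idx]

Phi = S @ evecs / np.sqrt(evals)       # POD modes, orthonormal in the W-inner product

# The basis is orthonormal with respect to W, not the Euclidean inner product
assert np.allclose(Phi.T @ W @ Phi, np.eye(r), atol=1e-8)
```

Projecting the governing equations onto `Phi` with the same inner product then yields the reduced model; changing `W` changes which flow features the basis resolves best.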
Model reduction for fluid flow simulation continues to be of great interest across a number of scientific and engineering fields. Here, we explore the use of Neural Ordinary Differential Equations, a recently introduced family of continuous-depth, differentiable networks (Chen et al., 2018), as a way to propagate latent-space dynamics in reduced order models. We compare their behavior with two classical non-intrusive methods based on proper orthogonal decomposition and radial basis function interpolation, as well as with dynamic mode decomposition. The test problems we consider include incompressible flow around a cylinder as well as real-world applications of shallow water hydrodynamics in riverine and estuarine systems. Our findings indicate that Neural ODEs provide an elegant framework for stable and accurate evolution of latent-space dynamics, with promising potential for extrapolative prediction. However, in order to facilitate their widespread adoption for large-scale systems, significant effort needs to be directed at accelerating their training times. This will enable a more comprehensive exploration of the hyperparameter space for building generalizable Neural ODE approximations over a wide range of system dynamics.
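One of the classical baselines mentioned above, dynamic mode decomposition, fits a best-fit linear map between successive snapshots and evolves the latent state with its eigenpairs. A minimal exact-DMD sketch on synthetic data with a known decaying rotation (the data, dimensions, and decay rate are illustrative assumptions, not the paper's test cases):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, r = 64, 40, 2

# Synthetic snapshots driven by a decaying rotation hidden in an
# r-dimensional subspace, so the true eigenvalues are known: 0.9 e^{+/- 0.3i}
theta = 0.3
B = 0.9 * np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
Q = np.linalg.qr(rng.standard_normal((n, r)))[0]
X = np.empty((n, m))
X[:, 0] = Q @ rng.standard_normal(r)
for k in range(1, m):
    X[:, k] = Q @ (B @ (Q.T @ X[:, k - 1]))

X1, X2 = X[:, :-1], X[:, 1:]

# Exact DMD: project the one-step map onto the leading r POD modes of X1
U, s, Vh = np.linalg.svd(X1, full_matrices=False)
Ur, sr, Vr = U[:, :r], s[:r], Vh[:r].conj().T
A_tilde = Ur.conj().T @ X2 @ Vr / sr          # r x r reduced operator
eigvals, eigvecs = np.linalg.eig(A_tilde)     # DMD eigenvalues
modes = (X2 @ Vr / sr) @ eigvecs              # exact DMD modes

# The hidden decay rate is recovered by the DMD spectrum
assert np.allclose(np.abs(eigvals), 0.9)
```

Where DMD commits to a single linear operator in latent space, a Neural ODE replaces it with a learned nonlinear right-hand side integrated continuously in time, which is the trade-off the comparison above probes.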
Generalizability of machine-learning (ML) based turbulence closures to accurately predict unseen practical flows remains an important challenge. It is well recognized that the ML neural network architecture and training protocol profoundly influence the generalizability characteristics. The objective of this work is to identify the unique challenges in finding the ML closure network hyperparameters that arise due to the inherent complexity of turbulence. Three proxy-physics turbulence surrogates of different degrees of complexity (yet significantly simpler than turbulence physics) are employed. The proxy-physics models mimic some of the key features of turbulence and provide training/testing data at low computational expense. The focus is on the following turbulence features: high dimensionality of the flow-physics parameter space, nonlinearity effects, and bifurcations in emergent behavior. A standard fully-connected neural network is used to reproduce the data of the simplified proxy-physics turbulence surrogates. Lacking a rigorous procedure to find globally optimal ML neural network hyperparameters, a brute-force parameter-space sweep is performed to examine the existence of locally optimal solutions. Even for this simple case, it is demonstrated that the choice of the optimal hyperparameters for a fully-connected neural network is not straightforward when it is trained with only partially available data in parameter space. Overall, specific issues to be addressed are identified, and the findings provide a realistic perspective on the utility of ML turbulence closures for practical applications.
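The brute-force sweep described above can be sketched as an exhaustive walk over a hyperparameter grid. The grid values and the toy loss landscape below are illustrative assumptions; a real sweep would train the closure network on the proxy-physics data at each grid point:

```python
import itertools
import math

# Hypothetical hyperparameter grid for a fully-connected closure network;
# names and values are illustrative, not the paper's actual sweep
grid = {
    "hidden_layers": [2, 4, 6],
    "neurons":       [10, 20, 40],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def validation_loss(hidden_layers, neurons, learning_rate):
    # Toy surrogate standing in for "train on proxy-physics data and
    # return the held-out loss"; it dips at one interior configuration
    return ((hidden_layers - 4) ** 2
            + (neurons - 20) ** 2 / 100
            + (math.log10(learning_rate) + 3) ** 2)

# Brute-force sweep: evaluate every grid point, keep the best configuration
best = min(
    (dict(zip(grid, combo)) for combo in itertools.product(*grid.values())),
    key=lambda p: validation_loss(**p),
)
```

Even this 27-point sweep makes the paper's caveat concrete: the "optimum" found is only local to the sampled grid, and with partial or noisy training data the loss landscape need not single out one configuration at all.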
Advanced nuclear reactors often exhibit complex thermal-fluid phenomena during transients. To accurately capture such phenomena, a coarse-mesh three-dimensional (3-D) modeling capability is desired for modern nuclear-system codes. In the coarse-mesh 3-D modeling of advanced-reactor transients that involve flow and heat transfer, accurately predicting the turbulent viscosity is a challenging task that requires an accurate and computationally efficient model to capture the unresolved fine-scale turbulence. In this paper, we propose a data-driven coarse-mesh turbulence model based on local flow features for the transient analysis of thermal mixing and stratification in a sodium-cooled fast reactor. The model has a coarse-mesh setup to ensure computational efficiency, while it is trained on fine-mesh computational fluid dynamics (CFD) data to ensure accuracy. A novel neural network architecture, combining a densely connected convolutional network and a long short-term memory (LSTM) network, is developed that can efficiently learn from the spatio-temporal CFD transient simulation results. The neural network model was trained and optimized on a loss-of-flow transient and demonstrated high accuracy in predicting the turbulent viscosity field during the whole transient. The trained model's generalization capability was also investigated on two other transients with different inlet conditions. The study demonstrates the potential of applying the proposed data-driven approach to support the coarse-mesh multi-dimensional modeling of advanced reactors.
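A convolutional-plus-LSTM model of this kind consumes short temporal histories of coarse-mesh flow-feature fields and predicts the viscosity field at the final step of each window. The data-shaping step can be sketched as follows; the mesh size, feature count, and window length are assumptions for illustration, not the paper's actual setup:

```python
import numpy as np

def make_windows(features, targets, window=8):
    # Pair each short history of coarse-mesh flow features with the
    # turbulent-viscosity field at the window's final time step
    X = np.stack([features[i:i + window]
                  for i in range(len(features) - window + 1)])
    y = targets[window - 1:]
    return X, y

# Hypothetical transient: 100 time steps, 16x16 coarse mesh, 5 local features
rng = np.random.default_rng(3)
feats = rng.standard_normal((100, 16, 16, 5))
nu_t = rng.standard_normal((100, 16, 16))      # target turbulent viscosity
X, y = make_windows(feats, nu_t)
assert X.shape == (93, 8, 16, 16, 5) and y.shape == (93, 16, 16)
```

The convolutional front end would then extract spatial features from each frame of a window, and the LSTM would evolve them across the window's time axis.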
In recent years, there has been a surge in applications of neural networks (NNs) in the physical sciences. Although various algorithmic advances have been proposed, there are, thus far, a limited number of studies that assess the interpretability of neural networks. This has contributed to the hasty characterization of most NN methods as black boxes and has hindered wider acceptance of more powerful machine learning algorithms for physics. In an effort to address such issues in fluid flow modeling, we use a probabilistic neural network (PNN) that provides confidence intervals for its predictions in a computationally efficient manner. The model is first assessed on the estimation of proper orthogonal decomposition (POD) coefficients from local sensor measurements of solutions of the shallow water equations. We find that the present model outperforms a well-known linear method with regard to estimation accuracy. The model is then applied to the estimation of the temporal evolution of POD coefficients, considering the wake of a NACA0012 airfoil with a Gurney flap and the NOAA sea surface temperature. The present model can accurately estimate the POD coefficients over time while also providing confidence intervals, thereby quantifying the uncertainty in the output given a particular training data set.
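A common linear baseline for this sensor-to-POD-coefficient task is a least-squares fit on the sensor-restricted POD modes (gappy-POD style). A minimal sketch on synthetic rank-limited data, where all dimensions and the sensor placement are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, r, p = 300, 50, 4, 12      # grid points, snapshots, POD modes, sensors

# Rank-r synthetic snapshot data (placeholder for shallow-water solutions)
S = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))
U = np.linalg.svd(S, full_matrices=False)[0][:, :r]   # POD modes

sensors = rng.choice(n, size=p, replace=False)        # random sensor locations

# A new field lying in the POD subspace, observed only at the sensors
a_true = rng.standard_normal(r)
y = (U @ a_true)[sensors]

# Linear estimate: least squares on the sensor-restricted modes
a_hat = np.linalg.lstsq(U[sensors], y, rcond=None)[0]
assert np.allclose(a_hat, a_true)
```

On noiseless data in the POD subspace this linear map is exact; the PNN's advantage shows up on noisy, nonlinear data, where it also attaches a confidence interval to each estimated coefficient rather than a single point value.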
We address the question of whether a singularity in a three-dimensional incompressible inviscid fluid flow can occur in finite time. Analytical considerations and numerical simulations suggest that high-symmetry flows are promising candidates for a finite-time blowup. Utilizing Lagrangian and geometric non-blowup criteria, we present numerical evidence against the formation of a finite-time singularity for the high-symmetry vortex dodecapole initial condition. We use data obtained from high-resolution adaptively refined numerical simulations and inject Lagrangian tracer particles to monitor geometric properties of vortex line segments. We then verify the assumptions made by the analytical non-blowup criteria introduced by Deng et al. [Commun. PDE 31 (2006)], which connect vortex line geometry (curvature, spreading) to velocity growth, to rule out singular behavior.
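Tracer-based diagnostics of this kind reduce to integrating particles through the velocity field and differentiating the resulting flow map; the finite-time Lyapunov exponent is the standard stretching measure obtained this way. A minimal sketch on an analytic saddle flow (the flow, time horizon, and step sizes are illustrative assumptions, not the simulation data of the paper):

```python
import numpy as np

def velocity(t, xy):
    # Analytic saddle flow u = (x, -y); a stand-in for simulation velocity data
    return np.array([xy[0], -xy[1]])

def flow_map(x0, T=1.0, steps=100):
    # RK4 tracer integration from t = 0 to t = T
    x = np.array(x0, dtype=float)
    dt = T / steps
    for k in range(steps):
        t = k * dt
        k1 = velocity(t, x)
        k2 = velocity(t + dt / 2, x + dt / 2 * k1)
        k3 = velocity(t + dt / 2, x + dt / 2 * k2)
        k4 = velocity(t + dt, x + dt * k3)
        x = x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

def ftle(x0, T=1.0, h=1e-4):
    # Finite-difference flow-map gradient -> Cauchy-Green tensor -> FTLE
    F = np.column_stack([
        (flow_map(x0 + h * e, T) - flow_map(x0 - h * e, T)) / (2 * h)
        for e in np.eye(2)
    ])
    lam_max = np.linalg.eigvalsh(F.T @ F)[-1]
    return np.log(lam_max) / (2 * abs(T))

# The saddle flow stretches at unit rate, so the FTLE is 1 everywhere
assert abs(ftle([0.3, 0.7]) - 1.0) < 1e-3
```

The non-blowup criteria of Deng et al. work instead with the geometry of vortex line segments (curvature and spreading of the tracers along them), but the computational skeleton of seeding, integrating, and differentiating tracer trajectories is the same.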