Advection-dominated dynamical systems, characterized by partial differential equations, are found in applications ranging from weather forecasting to engineering design where accuracy and robustness are crucial. There has been significant interest in the use of techniques borrowed from machine learning to reduce the computational expense and/or improve the accuracy of predictions for these systems. These techniques rely on the identification of a basis that reduces the dimensionality of the problem and the subsequent use of time series and sequential learning methods to forecast the evolution of the reduced state. Often, however, machine-learned predictions after reduced-basis projection are plagued by issues of stability stemming from incomplete capture of multiscale processes as well as from error growth over long forecast durations. To address these issues, we have developed a non-autoregressive time series approach for predicting linear reduced-basis time histories of forward models. In particular, we demonstrate that non-autoregressive counterparts of sequential learning methods such as the long short-term memory (LSTM) network considerably improve the stability of machine-learned reduced-order models. We evaluate our approach on the inviscid shallow water equations and show that a non-autoregressive variant of the standard LSTM approach that is bidirectional in the PCA components obtains the best accuracy for recreating the nonlinear dynamics of partial observations. Moreover, and critically for many applications of these surrogates, inference times are reduced by three orders of magnitude with our approach compared with both the equation-based Galerkin projection method and the standard LSTM approach.
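To make the autoregressive/non-autoregressive distinction concrete, here is a minimal numpy-only sketch (the toy snapshot data and all names are hypothetical, and ordinary least squares on Fourier features of the time stamp stands in for the LSTM): the surrogate maps time directly to the reduced coefficients, so predictions are never fed back on themselves and autoregressive error accumulation cannot occur.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)

# Toy snapshot matrix: a 50-dimensional state built from 5 decaying modes
# (a stand-in for PDE snapshots); columns are time steps.
modes = rng.standard_normal((50, 5))
temporal = np.stack([np.sin(2 * np.pi * (k + 1) * t) / (k + 1) ** 2
                     for k in range(5)])            # (5, 200)
X = modes @ temporal                                # (50, 200)

# Reduced basis via the SVD (POD/PCA); keep r coefficients per time step.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
r = 5
A = np.diag(S[:r]) @ Vt[:r]                         # (r, 200) reduced history

# Non-autoregressive surrogate: regress the reduced coefficients directly
# on features of the time stamp. No feedback of predictions occurs, so
# there is no autoregressive error growth over the forecast horizon.
F = np.column_stack([np.sin(2 * np.pi * (k + 1) * t) for k in range(8)]
                    + [np.cos(2 * np.pi * (k + 1) * t) for k in range(8)])
W, *_ = np.linalg.lstsq(F, A.T, rcond=None)
rel_err = np.linalg.norm(A - (F @ W).T) / np.linalg.norm(A)
```

Because the toy reduced history lies in the span of the chosen features, the fit here is essentially exact; a real LSTM-based surrogate would of course be approximate.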
In this work, we develop Non-Intrusive Reduced Order Models (NIROMs) that combine Proper Orthogonal Decomposition (POD) with a Radial Basis Function (RBF) interpolation method to construct efficient reduced order models for time-dependent problems arising in large-scale environmental flow applications. The performance of the POD-RBF NIROM is compared with a traditional nonlinear POD (NPOD) model by evaluating accuracy and robustness on test problems representative of riverine flows. Different greedy algorithms are studied to determine a near-optimal distribution of interpolation points for the RBF approximation. A new power-scaled residual greedy (psr-greedy) algorithm is proposed to address some of the primary drawbacks of the existing greedy approaches. The relative performances of these greedy algorithms are studied with numerical experiments using realistic two-dimensional (2D) shallow water flow applications involving coastal and riverine dynamics.
In the spirit of making high-order discontinuous Galerkin (DG) methods more competitive, researchers have developed the hybridized DG methods, a class of discontinuous Galerkin methods that generalizes the Hybridizable DG (HDG), the Embedded DG (EDG) and the Interior Embedded DG (IEDG) methods. These methods are amenable to hybridization (static condensation) and thus to more computationally efficient implementations. Like other high-order DG methods, however, they may suffer from numerical stability issues in under-resolved fluid flow simulations. To address this issue, we introduce hybridized DG methods for the compressible Euler and Navier-Stokes equations formulated in entropy variables. Under a suitable choice of the stabilization matrix, the scheme can be shown to be entropy stable and to satisfy the Second Law of Thermodynamics in an integral sense. The performance and robustness of the proposed family of schemes are illustrated through a series of steady and unsteady flow problems in subsonic, transonic, and supersonic regimes. The hybridized DG methods in entropy variables show the optimal accuracy order given by the polynomial approximation space, and are significantly superior to their counterparts in conservation variables in terms of stability and robustness, particularly for under-resolved and shock-dominated flows.
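The change of variables at the heart of such formulations can be made concrete for the 1D Euler equations. Below is a small numpy sketch (function names are mine, not the paper's) of the standard map from the conservative variables (rho, rho*u, E) to the entropy variables v = dU/du for the entropy pair U = -rho*s/(gamma - 1), s = ln(p / rho^gamma); the DG discretization itself is well beyond a sketch.

```python
import numpy as np

GAMMA = 1.4  # ratio of specific heats (ideal gas assumption)

def entropy_U(cons):
    # Mathematical entropy U = -rho * s / (gamma - 1) with
    # s = log(p / rho^gamma), for cons = (rho, rho*u, E) in 1D.
    rho, m, E = cons
    p = (GAMMA - 1.0) * (E - 0.5 * m**2 / rho)
    s = np.log(p) - GAMMA * np.log(rho)
    return -rho * s / (GAMMA - 1.0)

def entropy_variables(cons):
    # Analytic gradient v = dU/d(cons) for the entropy pair above;
    # this is the classical symmetrizing change of variables.
    rho, m, E = cons
    u = m / rho
    p = (GAMMA - 1.0) * (E - 0.5 * m * u)
    s = np.log(p) - GAMMA * np.log(rho)
    return np.array([
        (GAMMA - s) / (GAMMA - 1.0) - 0.5 * rho * u**2 / p,
        rho * u / p,
        -rho / p,
    ])
```

A quick finite-difference check of `entropy_variables` against the gradient of `entropy_U` confirms the algebra for any admissible state (positive density and pressure).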
Reduced Order Modeling (ROM) for engineering applications has been a major research focus in the past few decades due to the unprecedented physical insight into turbulence offered by high-fidelity CFD. The primary goal of a ROM is to model the key physics/features of a flow-field without computing the full Navier-Stokes (NS) equations. This is accomplished by projecting the high-dimensional dynamics onto a low-dimensional subspace, typically utilizing dimensionality reduction techniques like Proper Orthogonal Decomposition (POD) coupled with Galerkin projection. In this work, we demonstrate a deep learning based approach to build a ROM using the POD basis of canonical DNS datasets, for turbulent flow control applications. We find that a type of recurrent neural network, the Long Short-Term Memory (LSTM) network, which has primarily been utilized for problems like speech modeling and language translation, shows attractive potential in modeling the temporal dynamics of turbulence. Additionally, we introduce the Hurst exponent as a tool to study LSTM behavior for non-stationary data, and uncover useful characteristics that may aid ROM development for a variety of applications.
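The Hurst exponent mentioned above is commonly estimated by rescaled-range (R/S) analysis. The following numpy sketch is a simplified version of that estimator, offered as an illustration only; it is not necessarily the exact procedure used in the work, and small-sample bias makes the estimate approximate.

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Rescaled-range (R/S) estimate of the Hurst exponent of a series x.

    For each window size, the series is split into chunks; within each
    chunk the range of cumulative mean-removed sums is divided by the
    chunk standard deviation. The slope of log(R/S) vs. log(size) is H.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs = [], []
    size = min_chunk
    while size <= n // 2:
        vals = []
        for start in range(0, n - size + 1, size):
            chunk = x[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())
            spread = dev.max() - dev.min()
            scale = chunk.std()
            if scale > 0:
                vals.append(spread / scale)
        sizes.append(size)
        rs.append(np.mean(vals))
        size *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope

# Demo: white noise should give H near 0.5 (no memory); a random walk,
# being strongly non-stationary, should give H near 1.
rng = np.random.default_rng(3)
h_noise = hurst_rs(rng.standard_normal(4096))
h_walk = hurst_rs(np.cumsum(rng.standard_normal(4096)))
```

Values of H above 0.5 indicate persistent, trending behavior, which is the property the abstract exploits to characterize non-stationary latent-space signals.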
Non-intrusive reduced-order models (ROMs) have recently generated considerable interest for constructing computationally efficient counterparts of nonlinear dynamical systems emerging from various domain sciences. They provide a low-dimensional emulation framework for systems that may be intrinsically high-dimensional. This is accomplished by utilizing a construction algorithm that is purely data-driven. It is no surprise, therefore, that the algorithmic advances of machine learning have led to non-intrusive ROMs with greater accuracy and computational gains. However, in bypassing the utilization of an equation-based evolution, it is often seen that the interpretability of the ROM framework suffers. This becomes more problematic when black-box deep learning methods are used, which are notorious for lacking robustness outside the physical regime of the observed data. In this article, we propose the use of a novel latent-space interpolation algorithm based on Gaussian process regression. Notably, this reduced-order evolution of the system is parameterized by control parameters to allow for interpolation in parameter space. The procedure also provides a continuous representation of time, enabling temporal interpolation. The latter aspect provides information, with quantified uncertainty, about full-state evolution at a finer resolution than that utilized for training the ROMs. We assess the viability of this algorithm for an advection-dominated system given by the inviscid shallow water equations.
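The temporal-interpolation idea can be sketched with a bare-bones Gaussian process regression in numpy. This is a hedged illustration under simplifying assumptions: one latent POD coefficient, a squared-exponential kernel with fixed (not optimized) hyperparameters, and a hypothetical coefficient history; all names are mine.

```python
import numpy as np

def sq_exp(a, b, ell, sig=1.0):
    # Squared-exponential kernel k(t, t') = sig^2 exp(-(t - t')^2 / (2 ell^2)).
    return sig**2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

def gp_predict(t_train, y_train, t_test, ell=0.2, noise=1e-6):
    # Standard GP posterior mean and pointwise standard deviation.
    K = sq_exp(t_train, t_train, ell) + noise * np.eye(len(t_train))
    Ks = sq_exp(t_test, t_train, ell)
    mean = Ks @ np.linalg.solve(K, y_train)
    cov = sq_exp(t_test, t_test, ell) - Ks @ np.linalg.solve(K, Ks.T)
    std = np.sqrt(np.clip(np.diag(cov), 0.0, None))
    return mean, std

# Demo: a single latent coefficient observed at 15 coarse times; the GP
# interpolates it to a finer grid with quantified uncertainty.
t_obs = np.linspace(0.0, 1.0, 15)
a_obs = np.sin(2 * np.pi * t_obs)          # hypothetical coefficient history
t_fine = np.linspace(0.0, 1.0, 101)
a_mean, a_std = gp_predict(t_obs, a_obs, t_fine)
```

The posterior standard deviation collapses toward the noise level at observed times and grows between them, which is precisely the "quantified uncertainty at finer resolution" the abstract describes.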
Celestial objects exhibit a wide range of variability in brightness at different wavebands. Surprisingly, one of the most common approaches to characterizing time series in statistics -- parametric autoregressive modeling -- is rarely used to interpret astronomical light curves. We review standard ARMA, ARIMA and ARFIMA (autoregressive fractionally integrated moving average) models that treat short-memory autocorrelation, long-memory $1/f^\alpha$ `red noise', and nonstationary trends. Though these models are designed for evenly spaced time series, moderately irregular cadences can be treated as evenly spaced time series with missing data. Fitting algorithms are efficient and software implementations are widely available. We apply ARIMA models to light curves of four variable stars, discussing their effectiveness for different temporal characteristics. A variety of extensions to ARIMA are outlined, with emphasis on recently developed continuous-time models like CARMA and CARFIMA designed for irregularly spaced time series. Strengths and weaknesses of ARIMA-type modeling for astronomical data analysis and astrophysical insights are reviewed.
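The pure autoregressive core of the ARMA family is simple enough to fit by conditional least squares, which the sketch below illustrates in numpy (function names are mine; in practice one would use a mature package such as statsmodels or R, which also handle the MA and fractional-integration components).

```python
import numpy as np

def fit_ar(x, p):
    """Conditional least-squares fit of an AR(p) model
    x[t] = c + sum_k phi[k] * x[t-1-k] + eps[t]."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Design matrix: a constant column plus p lagged copies of the series.
    lags = np.column_stack([x[p - 1 - k : n - 1 - k] for k in range(p)])
    design = np.column_stack([np.ones(n - p), lags])
    coef, *_ = np.linalg.lstsq(design, x[p:], rcond=None)
    return coef[0], coef[1:]       # intercept c, AR coefficients phi

# Demo: simulate an AR(1) light-curve-like series with phi = 0.8
# (short-memory autocorrelation) and recover the coefficient.
rng = np.random.default_rng(2)
x = np.zeros(5000)
for t in range(1, len(x)):
    x[t] = 0.8 * x[t - 1] + rng.standard_normal()
c_hat, phi_hat = fit_ar(x, 1)
```

For irregularly sampled light curves, the continuous-time CARMA/CARFIMA models mentioned above replace this discrete recursion with a stochastic differential equation, avoiding the missing-data workaround entirely.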