Koopman mode analysis has provided a framework for the analysis of nonlinear phenomena across a wide range of fields. Its numerical implementation via Dynamic Mode Decomposition (DMD) has been extensively deployed and improved upon over the last decade. We address the problems of mean subtraction and DMD mode selection in the context of finite-dimensional Koopman invariant subspaces. Preprocessing of data by subtraction of the temporal mean of a time series has been a point of contention in companion matrix-based DMD, because this preprocessing can render DMD equivalent to a temporal discrete Fourier transform (DFT). We prove that this equivalence is impossible when the order of the DMD-based representation of the dynamics exceeds the dimension of the system. Moreover, this equivalence of DMD and DFT is mostly indicative of an inadequacy of data, in the sense that the number of snapshots taken is not enough to represent the true dynamics of the system. We then vindicate the practice of pruning DMD eigenvalues based on the norm of the respective modes. Once a minimum number of time delays has been taken, DMD eigenvalues corresponding to DMD modes with low norm are shown to be spurious and hence must be discarded. When dealing with mean-subtracted data, the above criterion for detecting spurious eigenvalues can be applied after additional preprocessing, which takes the form of either an eigenvalue constraint on companion DMD or one further time delay.
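As a rough illustration of the ideas above, the following sketch implements companion-matrix DMD with optional temporal mean subtraction and norm-based pruning of the resulting modes. The function name, the least-squares formulation, and the threshold `tol` are illustrative choices, not the authors' implementation.

```python
# Minimal sketch of companion-matrix DMD with optional mean subtraction and
# mode-norm pruning (illustrative only).
import numpy as np

def companion_dmd(X, subtract_mean=False, tol=1e-8):
    """X: (n, m+1) snapshot matrix [x_0, ..., x_m]."""
    if subtract_mean:
        X = X - X.mean(axis=1, keepdims=True)   # subtract the temporal mean
    K, x_last = X[:, :-1], X[:, -1]             # Krylov basis and final snapshot
    c, *_ = np.linalg.lstsq(K, x_last, rcond=None)
    m = K.shape[1]
    C = np.zeros((m, m))                        # companion matrix: shift plus last column c
    C[1:, :-1] = np.eye(m - 1)
    C[:, -1] = c
    evals, evecs = np.linalg.eig(C)
    modes = K @ evecs                           # candidate DMD modes
    keep = np.linalg.norm(modes, axis=0) > tol  # discard low-norm (spurious) modes
    return evals[keep], modes[:, keep]
```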
Dynamic Mode Decomposition (DMD) is a powerful tool for extracting spatial and temporal patterns from multi-dimensional time series, and it has been used successfully in a wide range of fields, including fluid mechanics, robotics, and neuroscience. Two of the main challenges remaining in DMD research are noise sensitivity and issues related to Krylov space closure when modeling nonlinear systems. Here, we investigate the combination of noise and nonlinearity in a controlled setting by studying a class of systems with linear latent dynamics that are observed via multinomial observables. Our numerical models include system and measurement noise. We explore the influences of dataset metrics, the spectrum of the latent dynamics, the normality of the system matrix, and the geometry of the dynamics. Our results show that even under these very mildly nonlinear conditions, DMD methods often fail to recover the spectrum and can have poor predictive ability. Our work is motivated by our experience modeling multilegged robot data, where we have encountered great difficulty in reconstructing time series for oscillatory systems whose slow transients decay only slightly faster than an oscillation period.
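A minimal numerical experiment in the spirit of this setup might look as follows: a two-dimensional, slowly decaying latent oscillation observed through quadratic (multinomial) observables, with process and measurement noise, followed by a plain SVD-based DMD of the observed time series. All parameter values and noise levels are arbitrary choices for the sketch.

```python
# Illustrative experiment: slowly decaying latent rotation, quadratic observables,
# process and measurement noise, then standard SVD-based DMD.
import numpy as np

rng = np.random.default_rng(0)
theta, decay = 0.3, 0.99                        # rotation per step, slow transient
A = decay * np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])

m, z = 200, np.array([1.0, 0.0])
Z = []
for _ in range(m):
    z = A @ z + 1e-3 * rng.standard_normal(2)   # process (system) noise
    Z.append(z)
Z = np.array(Z).T                               # latent states, shape (2, m)

# multinomial observables g(z) = (z1, z2, z1^2, z1*z2, z2^2) plus measurement noise
G = np.vstack([Z, Z[0]**2, Z[0]*Z[1], Z[1]**2]) + 1e-3 * rng.standard_normal((5, m))

# plain SVD-based DMD on the observed time series
X, Y = G[:, :-1], G[:, 1:]
U, s, Vh = np.linalg.svd(X, full_matrices=False)
Atilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
dmd_evals = np.linalg.eigvals(Atilde)

print("latent eigenvalues:", np.linalg.eigvals(A))
print("DMD eigenvalues:   ", np.sort_complex(dmd_evals))
```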
Dynamic Mode Decomposition (DMD) is a well-established data-driven method for finding temporally evolving linear-mode decompositions of nonlinear time series. Traditionally, the method presumes that all relevant dimensions are sampled through measurement. To address dynamical systems in which the data may be incomplete or represent only a partial observation of a more complex system, we extend the DMD algorithm by incorporating a Mori-Zwanzig decomposition to derive memory kernels that capture the averaged dynamics of the unresolved variables as projected onto the resolved dimensions. From this, we derive what we call the Memory-Dependent Dynamic Mode Decomposition (MDDMD). Through numerical examples, the MDDMD method is shown to produce reasonable approximations of the ensemble-averaged dynamics of the full system given a single time-series measurement of the resolved variables.
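The sketch below conveys the memory idea in a deliberately simplified form: the next resolved state is regressed on a finite history of past resolved states, so the learned delay blocks play the role of a discrete memory kernel. This is only a conceptual illustration, not the Mori-Zwanzig derivation underlying MDDMD; the function name and the memory depth `p` are hypothetical.

```python
# Simplified memory-augmented regression: x_{k+1} ~ A_0 x_k + ... + A_p x_{k-p}.
# The blocks A_1..A_p act as a discrete memory kernel on the resolved variables.
import numpy as np

def memory_dmd(X, p=3):
    """X: (n, m) time series of resolved variables; p: memory depth."""
    n, m = X.shape
    # stack the current state and p delayed copies as regression features
    feats = np.vstack([X[:, p - j : m - 1 - j] for j in range(p + 1)])
    targets = X[:, p + 1 : m]
    coeffs, *_ = np.linalg.lstsq(feats.T, targets.T, rcond=None)
    A = coeffs.T                                       # shape (n, n*(p+1))
    return [A[:, j*n:(j+1)*n] for j in range(p + 1)]   # [A_0, A_1, ..., A_p]
```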
Extended dynamic mode decomposition (EDMD) provides a class of algorithms to identify patterns and effective degrees of freedom in complex dynamical systems. We show that the modes identified by EDMD correspond to those of compact Perron-Frobenius and Koopman operators defined on suitable Hardy-Hilbert spaces when the method is applied to classes of analytic maps. Our findings elucidate the interpretation of the spectra obtained by EDMD for complex dynamical systems. We illustrate our results by numerical simulations for analytic maps.
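For readers unfamiliar with the algorithm itself, a bare-bones EDMD step looks roughly like the sketch below: snapshot pairs are lifted by a dictionary of observables and a least-squares problem yields a finite-dimensional approximation of the Koopman operator. The monomial dictionary and the analytic example map are placeholder choices, not those analysed here.

```python
# Bare-bones EDMD: lift snapshot pairs with a dictionary and solve a
# least-squares problem for the finite-dimensional Koopman approximation.
import numpy as np

def edmd(X, Y, dictionary):
    """X, Y: (m, d) arrays of snapshot pairs with y_i = F(x_i); dictionary: x -> R^N."""
    PsiX = np.array([dictionary(x) for x in X])          # lifted data, (m, N)
    PsiY = np.array([dictionary(y) for y in Y])
    K, *_ = np.linalg.lstsq(PsiX, PsiY, rcond=None)      # Koopman matrix approximation
    evals, evecs = np.linalg.eig(K)                      # eigenvalues / eigenfunction coefficients
    return evals, evecs

# example: analytic map x -> 0.9*sin(pi*x) with a monomial dictionary
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=(500, 1))
y = 0.9 * np.sin(np.pi * x)
monomials = lambda v: np.array([v[0]**k for k in range(6)])
evals, _ = edmd(x, y, monomials)
print(np.sort_complex(evals))
```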
We employ the framework of the Koopman operator and dynamic mode decomposition to devise a computationally cheap and easily implementable method to detect transient dynamics and regime changes in time series. We argue that transient dynamics typically explores the full state-space dimension before relaxing quickly towards the attractor, whereas in equilibrium the dynamics evolves on a slower time scale on a lower-dimensional attractor. The reconstruction error of a dynamic mode decomposition is used to monitor both the inability of the time series to resolve the fast relaxation towards the attractor and the effective dimension of the dynamics. We illustrate our method by detecting transient dynamics in the Kuramoto-Sivashinsky equation. We further apply our method to atmospheric reanalysis data; our diagnostic detects the transition from a predominantly negative North Atlantic Oscillation (NAO) to a predominantly positive NAO around 1970, as well as the recently found regime change in the Southern Hemisphere atmospheric circulation around 1970.
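A sliding-window version of such a reconstruction-error diagnostic could be sketched as follows: fit a low-rank DMD on each window and record the relative one-step reconstruction error, with large errors flagging windows dominated by fast, transient, high-dimensional dynamics. The window length and rank are illustrative parameters, not those used in the applications above.

```python
# Sliding-window DMD reconstruction error as a transient/regime-change indicator.
import numpy as np

def dmd_reconstruction_error(data, window=50, rank=5):
    """data: (n, m) time series; returns one relative error per window start."""
    n, m = data.shape
    errors = []
    for start in range(0, m - window):
        W = data[:, start:start + window]
        X, Y = W[:, :-1], W[:, 1:]
        U, s, Vh = np.linalg.svd(X, full_matrices=False)
        U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
        A = Y @ Vh.conj().T @ np.diag(1.0 / s) @ U.conj().T   # rank-r propagator
        errors.append(np.linalg.norm(Y - A @ X) / np.linalg.norm(Y))
    return np.array(errors)
```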
Research in modern data-driven dynamical systems is typically focused on the three key challenges of high dimensionality, unknown dynamics, and nonlinearity. The dynamic mode decomposition (DMD) has emerged as a cornerstone for modeling high-dimensional systems from data. However, the quality of the linear DMD model is known to be fragile with respect to strong nonlinearity, which contaminates the model estimate. In contrast, sparse identification of nonlinear dynamics (SINDy) learns fully nonlinear models, disambiguating the linear and nonlinear effects, but is restricted to low-dimensional systems. In this work, we present a kernel method that learns interpretable data-driven models for high-dimensional, nonlinear systems. Our method performs kernel regression on a sparse dictionary of samples that appreciably contribute to the underlying dynamics. We show that this kernel method efficiently handles high-dimensional data and is flexible enough to incorporate partial knowledge of system physics. It is possible to accurately recover the linear model contribution with this approach, disambiguating the effects of the implicitly defined nonlinear terms, resulting in a DMD-like model that is robust to strongly nonlinear dynamics. We demonstrate our approach on data from a wide range of nonlinear ordinary and partial differential equations that arise in the physical sciences. This framework can be used for many practical engineering tasks such as model order reduction, diagnostics, prediction, control, and discovery of governing laws.
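To fix ideas, a heavily simplified sketch of kernel regression on a small dictionary of snapshots is given below; the RBF kernel, the random dictionary selection, and the regularization are placeholder choices, and the paper's actual sparse dictionary construction and linear/nonlinear disambiguation are not reproduced.

```python
# Kernel regression of the one-step map on a small dictionary of snapshots
# (subset-of-regressors form); illustrative only.
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)   # pairwise squared distances
    return np.exp(-gamma * d2)

def kernel_one_step_model(X, Y, n_dict=50, lam=1e-6, gamma=1.0, seed=0):
    """X, Y: (m, n) snapshot pairs with y_i = F(x_i)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=min(n_dict, len(X)), replace=False)
    D = X[idx]                                   # sparse dictionary of samples
    Kdx = rbf_kernel(D, X, gamma)                # (n_dict, m)
    Kdd = rbf_kernel(D, D, gamma)
    # regularized regression  Y ~ k(x, D) @ W  solved in subset-of-regressors form
    W = np.linalg.solve(Kdx @ Kdx.T + lam * Kdd, Kdx @ Y)
    return lambda x: rbf_kernel(np.atleast_2d(x), D, gamma) @ W   # one-step predictor
```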