Starting from measured data, we develop a method to compute the fine structure of the spectrum of the Koopman operator with rigorous convergence guarantees. The method is based on the observation that, in the measure-preserving ergodic setting, the moments of the spectral measure associated to a given observable are computable from a single trajectory of this observable. Having finitely many moments available, we use the classical Christoffel-Darboux kernel to separate the atomic and absolutely continuous parts of the spectrum, supported by convergence guarantees as the number of moments tends to infinity. In addition, we propose a technique to detect the singular continuous part of the spectrum as well as two methods to approximate the spectral measure with guaranteed convergence in the weak topology, irrespective of whether the singular continuous part is present or not. The proposed method is simple to implement and readily applicable to large-scale systems since the computational complexity is dominated by inverting an $N \times N$ Hermitian positive-definite Toeplitz matrix, where $N$ is the number of moments, for which efficient and numerically stable algorithms exist; in particular, the complexity of the approach is independent of the dimension of the underlying state-space. We also show how to compute, from measured data, the spectral projection on a given segment of the unit circle, allowing us to obtain a finite-dimensional approximation of the operator that explicitly takes into account the point and continuous parts of the spectrum. Finally, we describe a relationship between the proposed method and the so-called Hankel Dynamic Mode Decomposition, providing new insights into the behavior of the eigenvalues of the Hankel DMD operator. A number of numerical examples illustrate the approach, including a study of the spectrum of the lid-driven two-dimensional cavity flow.
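As a rough illustration of this pipeline, the following Python sketch (the function names, toy signal, and noise level are assumptions made for the example, not the authors' code) estimates the moments by time averages along a single trajectory, forms the Hermitian positive-definite Toeplitz moment matrix, and evaluates the Christoffel function on a grid of angles; atoms of the spectral measure are the points where it does not decay as the number of moments grows.

```python
# Minimal sketch (illustrative, not the authors' implementation) of the
# moment/Christoffel-Darboux idea: lag autocorrelations of a single observable
# trajectory serve as the moments m_k = <U^k f, f> of the spectral measure,
# they are assembled into the Hermitian Toeplitz moment matrix T_N, and the
# Christoffel function lambda_N(theta) = 1 / ( v(theta)^* T_N^{-1} v(theta) )
# is evaluated on a grid: lambda_N stays O(1) at atoms and decays like 1/N on
# the absolutely continuous part.
import numpy as np
from scipy.linalg import toeplitz, solve

def spectral_moments(f_traj, N):
    """Estimate m_k = <U^k f, f>, k = 0..N-1, by ergodic time averages."""
    M = len(f_traj)
    return np.array([np.mean(f_traj[k:] * np.conj(f_traj[:M - k]))
                     for k in range(N)])

def christoffel_function(moments, thetas):
    """Evaluate lambda_N(theta) via the Toeplitz moment matrix."""
    T = toeplitz(np.conj(moments))          # (T)_{kj} = m_{j-k}, Hermitian Toeplitz
    vals = np.empty_like(thetas)
    for i, th in enumerate(thetas):
        w = np.exp(-1j * np.arange(len(moments)) * th)
        # In practice one would factor T once (or use a fast Toeplitz solver).
        vals[i] = 1.0 / np.real(np.conj(w) @ solve(T, w, assume_a='her'))
    return vals

# Toy signal: two rotations (atoms) plus white noise (an absolutely continuous
# part, which also keeps T_N well conditioned in this small example).
rng = np.random.default_rng(0)
n = np.arange(20000)
f_traj = (np.exp(2j * np.pi * 0.1234 * n) + 0.3 * np.exp(2j * np.pi * 0.61 * n)
          + 0.2 * (rng.standard_normal(n.size) + 1j * rng.standard_normal(n.size)))
thetas = np.linspace(0.0, 2 * np.pi, 1000, endpoint=False)
lam = christoffel_function(spectral_moments(f_traj, N=200), thetas)
```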
Numerical approximation methods for the Koopman operator have advanced considerably in the last few years. In particular, data-driven approaches such as dynamic mode decomposition (DMD) and its generalization, extended DMD (EDMD), are becoming increasingly popular in practical applications. EDMD improves upon classical DMD by allowing a flexible choice of a dictionary of observables that spans a finite-dimensional subspace on which the Koopman operator can be approximated. This enhances the accuracy of the solution reconstruction and broadens the applicability of the Koopman formalism. Although the convergence of EDMD has been established, applying the method in practice requires a careful choice of the observables to achieve good accuracy with only a finite number of terms. This is especially difficult for high-dimensional and highly nonlinear systems. In this paper, we employ ideas from machine learning to improve upon the EDMD method. We develop an iterative approximation algorithm that couples EDMD with a trainable dictionary represented by an artificial neural network. Using the Duffing oscillator and the Kuramoto-Sivashinsky PDE as examples, we show that our algorithm can effectively and efficiently adapt the trainable dictionary to the problem at hand and achieve good reconstruction accuracy without the need to choose a fixed dictionary a priori. Furthermore, to obtain a given accuracy, our method requires fewer dictionary terms than EDMD with fixed dictionaries. This alleviates an important shortcoming of the EDMD algorithm and enhances the applicability of the Koopman framework to practical problems.
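For concreteness, the sketch below illustrates one way to realize the alternating scheme described above in PyTorch; the architecture, optimizer settings, and placeholder data are assumptions for this example rather than the paper's configuration, and in practice a regularizer is typically added to keep the learned dictionary from collapsing onto trivial features. With the dictionary frozen, the Koopman matrix is obtained as the EDMD least-squares solution; with the matrix frozen, the network is trained to shrink the same one-step residual.

```python
# Minimal sketch of EDMD with a trainable dictionary (illustrative, not the
# authors' implementation): alternate between (i) the closed-form EDMD
# least-squares step for the Koopman matrix K and (ii) gradient updates of the
# neural-network part of the dictionary, both driven by the one-step residual
# || Psi(Y) - Psi(X) K ||^2 on snapshot pairs (x_n, y_n = F(x_n)).
import torch

X = torch.randn(1000, 2)                     # placeholder snapshot data
Y = X + 0.01 * torch.randn(1000, 2)          # placeholder images y_n = F(x_n)

net = torch.nn.Sequential(                   # trainable part of the dictionary
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 22))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def psi(Z):
    # Keep a constant and the state itself as fixed features so the trivial
    # eigenfunction and the full-state observable remain in the span.
    return torch.cat([torch.ones(len(Z), 1), Z, net(Z)], dim=1)

for it in range(200):
    with torch.no_grad():                    # (i) EDMD step: K for the current dictionary
        K = torch.linalg.lstsq(psi(X), psi(Y)).solution
    for _ in range(5):                       # (ii) dictionary step: reduce the residual
        opt.zero_grad()
        loss = torch.mean((psi(Y) - psi(X) @ K) ** 2)
        loss.backward()
        opt.step()

koopman_eigs = torch.linalg.eigvals(K)       # approximate Koopman eigenvalues
```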
We analyze the performance of Dynamic Mode Decomposition (DMD)-based approximations of the stochastic Koopman operator for random dynamical systems where either the dynamics or the observables are affected by noise. Under certain ergodicity assumptions, we show that standard DMD algorithms converge provided the observables do not contain any noise and span an invariant subspace of the stochastic Koopman operator. For observables with noise, we introduce a new, robust DMD algorithm that can approximate the stochastic Koopman operator, and we demonstrate how this algorithm can be applied to Krylov subspace-based methods using a single observable measured over a single trajectory. We test the performance of the algorithms on several examples.
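As a point of reference for the Krylov-subspace construction mentioned above, here is a minimal numpy sketch of plain Hankel (time-delay) DMD applied to a single observable measured along a single trajectory; the toy signal, the number of delays, and the rank threshold are assumptions for the example, and this is the standard baseline rather than the robust algorithm introduced in the paper.

```python
# Plain Hankel DMD from one observable on one trajectory (illustrative sketch):
# time-delayed copies of the observable form the Krylov matrix, and standard
# DMD is applied to its two time-shifted halves.
import numpy as np

def hankel_dmd(f_traj, n_delays):
    """DMD eigenvalues from a single observable via time-delay (Hankel) embedding."""
    m = len(f_traj) - n_delays
    # Column j holds the delay vector (f(x_j), f(x_{j+1}), ..., f(x_{j+d-1})).
    H = np.column_stack([f_traj[j:j + n_delays] for j in range(m + 1)])
    X, Y = H[:, :-1], H[:, 1:]                      # shifted snapshot matrices
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    r = int(np.sum(s > 1e-10 * s[0]))               # numerical rank truncation
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    A = U.conj().T @ Y @ Vh.conj().T / s            # projected DMD operator
    return np.linalg.eigvals(A)

# Noise-free toy signal generated by two Koopman eigenvalues; Hankel DMD
# recovers them from the scalar observable alone.
lam_true = np.array([0.95 * np.exp(0.3j), 0.95 * np.exp(-0.3j)])
n = np.arange(300)
f_traj = (lam_true[0] ** n + lam_true[1] ** n).real
print(hankel_dmd(f_traj, n_delays=20))
```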
We study numerical approaches to the computation of spectral properties of composition operators. We provide a characterization of Koopman modes in Banach spaces using Generalized Laplace Analysis. We cast Dynamic Mode Decomposition-type methods in the context of the finite-section theory of infinite-dimensional operators and provide an example of a mixing map for which the finite-section method fails. Under assumptions on the underlying dynamics, we provide the first result on the convergence rate of the finite-section approximation as the sample size increases. We study the error in the Krylov subspace version of the finite-section method and prove convergence in the pseudospectral sense for operators with pure point spectrum. This result indicates that Krylov sequence-based approximations can have low error without an exponential-in-dimension increase in the number of functions needed for approximation.
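To make the finite-section picture concrete, the sketch below compresses a composition operator to the span of finitely many Fourier modes, with the matrix entries estimated by ergodic averages along a single trajectory; the irrational rotation used here is a benign illustrative choice whose finite sections behave well, not the mixing counterexample discussed in the paper.

```python
# Finite section of a Koopman (composition) operator in a Fourier basis
# (illustrative sketch): entries <U psi_j, psi_i> are estimated by averaging
# psi_j(T x_n) * conj(psi_i(x_n)) along one trajectory.
import numpy as np

def finite_section(traj, N):
    """Section of U f = f o T on span{ e^{2 pi i k x} : |k| <= N } from one orbit."""
    x, Tx = traj[:-1], traj[1:]                       # samples of x_n and T(x_n)
    ks = np.arange(-N, N + 1)
    Psi_x = np.exp(2j * np.pi * np.outer(ks, x))      # psi_i(x_n)
    Psi_Tx = np.exp(2j * np.pi * np.outer(ks, Tx))    # psi_j(T x_n) = (U psi_j)(x_n)
    return (np.conj(Psi_x) @ Psi_Tx.T) / len(x)       # K[i, j] ~ <U psi_j, psi_i>

# Irrational rotation T(x) = x + omega (mod 1): U is diagonal in this basis,
# so the finite-section eigenvalues approximate e^{2 pi i k omega} on the circle.
omega = np.sqrt(2) - 1
traj = (0.1 + omega * np.arange(20000)) % 1.0
K = finite_section(traj, N=10)
print(np.sort(np.abs(np.linalg.eigvals(K))))          # magnitudes close to one
```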
We provide an overview of the Koopman operator analysis for a class of partial differential equations describing the relaxation of a field variable to a stable stationary state. We introduce Koopman eigenfunctionals of the system and use the notion of conjugacy to develop a spectral expansion of the Koopman operator. For linear systems such as the diffusion equation, the Koopman eigenfunctionals can be expressed as linear functionals of the field variable. The notion of inertial manifolds is shown to correspond to joint zero level sets of Koopman eigenfunctionals, and isostables are defined as the level sets of the slowest-decaying Koopman eigenfunctional. The linear diffusion equation, the nonlinear Burgers equation, and the nonlinear phase-diffusion equation are analyzed as examples.
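As a concrete instance of the statement about linear systems (a standard computation included here for illustration; the domain and boundary conditions are chosen for simplicity and are not taken from the paper), consider the heat equation $\partial_t u = \partial_x^2 u$ on $[0,\pi]$ with $u(0,t) = u(\pi,t) = 0$. The linear functionals
\[
\phi_k[u] = \int_0^{\pi} u(x)\,\sin(kx)\,\mathrm{d}x, \qquad k = 1, 2, \dots,
\]
satisfy $\tfrac{d}{dt}\,\phi_k[u(\cdot,t)] = -k^2\,\phi_k[u(\cdot,t)]$ after two integrations by parts, hence $\phi_k[u(\cdot,t)] = e^{-k^2 t}\,\phi_k[u(\cdot,0)]$; each $\phi_k$ is thus a Koopman eigenfunctional of the time-$t$ solution map with eigenvalue $e^{-k^2 t}$. In this example the isostables are the level sets of the slowest-decaying functional $\phi_1$, and the joint zero level set $\{u : \phi_k[u] = 0 \text{ for all } k > m\}$ is the subspace spanned by the $m$ slowest modes, illustrating the inertial-manifold correspondence.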
We provide a framework for learning dynamical systems rooted in the concepts of representations and Koopman operators. The interplay between the two leads to a full description of the systems that can be represented linearly in finite dimension, based on the properties of the Koopman operator spectrum. The geometry of the state space is connected to the notion of representation, both in the linear case, where it is related to joint level sets of eigenfunctions, and in the nonlinear representation case. As shown here, even nonlinear finite-dimensional representations can be learned using the Koopman operator framework, leading to a new class of representation eigenproblems. The connection to learning with neural networks is given. An extension of Koopman operator theory to static maps between different spaces is provided. The effect of the Koopman operator spectrum on Mori-Zwanzig-type representations is discussed.
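A standard example behind the linear-representation statement (included for illustration, not quoted from the paper): if $\phi_1,\dots,\phi_n$ are Koopman eigenfunctions of a map $T$, i.e. $\phi_j \circ T = \lambda_j\,\phi_j$, then the coordinates $z = \big(\phi_1(x),\dots,\phi_n(x)\big)$ evolve linearly,
\[
z \;\longmapsto\; \operatorname{diag}(\lambda_1,\dots,\lambda_n)\, z ,
\]
and the joint level sets of the $\phi_j$ are precisely the sets of states that this finite-dimensional representation does not distinguish; when $x \mapsto z$ is injective, the dynamics is linearized in these coordinates.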