We employ the framework of the Koopman operator and dynamic mode decomposition to devise a computationally cheap and easily implementable method to detect transient dynamics and regime changes in time series. We argue that transient dynamics typically explores the full dimension of the state space, with subsequent fast relaxation towards the attractor, whereas in equilibrium the dynamics evolves on a slower time scale on a lower-dimensional attractor. The reconstruction error of a dynamic mode decomposition is used to monitor both the inability of the time series to resolve the fast relaxation towards the attractor and the effective dimension of the dynamics. We illustrate our method by detecting transient dynamics in the Kuramoto-Sivashinsky equation. We further apply our method to atmospheric reanalysis data; our diagnostic detects the transition from a predominantly negative North Atlantic Oscillation (NAO) to a predominantly positive NAO around 1970, as well as the recently found regime change in the Southern Hemisphere atmospheric circulation around 1970.
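As a concrete illustration of a reconstruction-error diagnostic of the kind described above (a minimal sketch, not the authors' exact algorithm), one can fit a rank-truncated exact DMD to a snapshot matrix and report the relative one-step prediction error; the function name and truncation choice are illustrative assumptions:

```python
import numpy as np

def dmd_reconstruction_error(X, r):
    """Fit a rank-r exact DMD to the snapshot matrix X (state x time)
    and return the relative error of the one-step linear reconstruction."""
    X1, X2 = X[:, :-1], X[:, 1:]                  # consecutive snapshot pairs
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]         # rank-r truncation
    A_tilde = U.conj().T @ X2 @ Vh.conj().T / s   # projected linear operator
    X2_hat = U @ (A_tilde @ (U.conj().T @ X1))    # one-step prediction of X2
    return np.linalg.norm(X2 - X2_hat) / np.linalg.norm(X2)
```

Monitoring this error over a sliding window, together with the rank needed to keep it small, is one way to flag windows where the dynamics transiently occupies more dimensions than the attractor supports.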
Koopman mode analysis has provided a framework for the analysis of nonlinear phenomena across a plethora of fields. Its numerical implementation via Dynamic Mode Decomposition (DMD) has been extensively deployed and improved upon over the last decade. We address the problems of mean subtraction and DMD mode selection in the context of finite-dimensional Koopman-invariant subspaces. Preprocessing of data by subtraction of the temporal mean of a time series has been a point of contention in companion-matrix-based DMD, since this preprocessing can render DMD equivalent to a temporal DFT. We prove that this equivalence is impossible when the order of the DMD-based representation of the dynamics exceeds the dimension of the system. Moreover, this parity of DMD and DFT is mostly indicative of an inadequacy of the data, in the sense that the number of snapshots taken is not enough to represent the true dynamics of the system. We then vindicate the practice of pruning DMD eigenvalues based on the norm of the respective modes. Once a minimum number of time delays has been taken, DMD eigenvalues corresponding to DMD modes with low norm are shown to be spurious and hence must be discarded. When dealing with mean-subtracted data, the above criterion for detecting spurious eigenvalues can be applied after additional preprocessing, which takes the form of either an eigenvalue constraint on Companion DMD or yet another time delay.
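The norm-based pruning criterion discussed above can be sketched as a simple post-processing step on a set of DMD modes and eigenvalues (an illustrative helper, not the paper's procedure; the relative threshold is an assumption and is problem-dependent):

```python
import numpy as np

def prune_spurious(eigvals, Phi, rel_tol=1e-6):
    """Keep DMD eigenvalues whose modes carry non-negligible norm.

    Phi holds the DMD modes as columns; eigenvalues whose mode norm
    falls below rel_tol times the largest mode norm are treated as
    spurious and discarded."""
    norms = np.linalg.norm(Phi, axis=0)
    keep = norms > rel_tol * norms.max()
    return eigvals[keep], Phi[:, keep]
```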
The Dynamic Mode Decomposition (DMD) is a well-established data-driven method for finding temporally evolving linear-mode decompositions of nonlinear time series. Traditionally, this method presumes that all relevant dimensions are sampled through measurement. To address dynamical systems in which the data may be incomplete or represent only a partial observation of a more complex system, we extend the DMD algorithm by including a Mori-Zwanzig decomposition to derive memory kernels that capture the averaged dynamics of the unresolved variables as projected onto the resolved dimensions. From this, we derive what we call the Memory-Dependent Dynamic Mode Decomposition (MDDMD). Through numerical examples, the MDDMD method is shown to produce reasonable approximations of the ensemble-averaged dynamics of the full system given a single time-series measurement of the resolved variables.
Many natural systems undergo critical transitions, i.e. sudden shifts from one dynamical regime to another. In the climate system, the atmospheric boundary layer can experience sudden transitions between fully turbulent states and quiescent, quasi-laminar states. Such rapid transitions are observed in polar regions or at night when the atmospheric boundary layer is stably stratified, and they have important consequences for the strength of mixing with the higher levels of the atmosphere. To analyze the stable boundary layer, many approaches rely on the identification of regimes that are commonly denoted as the weakly and very stable regimes. Detecting transitions between these regimes is crucial for modeling purposes. In this work, a combination of methods from dynamical systems and statistical modeling is applied to study these regime transitions and to develop an early-warning signal that can be applied to non-stationary field data. The presented metric aims at detecting approaching transitions by statistically quantifying the deviation from the dynamics expected when the system is close to a stable equilibrium. An idealized stochastic model of near-surface
Research in modern data-driven dynamical systems is typically focused on the three key challenges of high dimensionality, unknown dynamics, and nonlinearity. The dynamic mode decomposition (DMD) has emerged as a cornerstone for modeling high-dimensional systems from data. However, the quality of the linear DMD model is known to be fragile with respect to strong nonlinearity, which contaminates the model estimate. In contrast, sparse identification of nonlinear dynamics (SINDy) learns fully nonlinear models, disambiguating the linear and nonlinear effects, but is restricted to low-dimensional systems. In this work, we present a kernel method that learns interpretable data-driven models for high-dimensional, nonlinear systems. Our method performs kernel regression on a sparse dictionary of samples that appreciably contribute to the underlying dynamics. We show that this kernel method efficiently handles high-dimensional data and is flexible enough to incorporate partial knowledge of system physics. It is possible to accurately recover the linear model contribution with this approach, disambiguating the effects of the implicitly defined nonlinear terms, resulting in a DMD-like model that is robust to strongly nonlinear dynamics. We demonstrate our approach on data from a wide range of nonlinear ordinary and partial differential equations that arise in the physical sciences. This framework can be used for many practical engineering tasks such as model order reduction, diagnostics, prediction, control, and discovery of governing laws.
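The core ingredient described above, regression expanded over a sparse dictionary of samples, can be illustrated generically (a hedged sketch of kernel ridge regression on a sample dictionary, not the authors' specific method; the RBF kernel, dictionary choice, and regularization are assumptions):

```python
import numpy as np

def rbf(Xa, Xb, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row-sample sets Xa and Xb."""
    d2 = ((Xa[:, None, :] - Xb[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_kernel_model(X, Y, dict_idx, gamma=1.0, reg=1e-6):
    """Kernel ridge regression of targets Y on samples X, with the
    feature expansion restricted to a sparse dictionary of rows dict_idx."""
    D = X[dict_idx]
    K = rbf(X, D, gamma)                       # (n_samples, n_dict)
    W = np.linalg.solve(K.T @ K + reg * np.eye(len(dict_idx)), K.T @ Y)
    return D, W

def predict(x, D, W, gamma=1.0):
    """Evaluate the fitted kernel model at a new sample x."""
    return rbf(np.atleast_2d(x), D, gamma) @ W
```

Restricting the expansion to a dictionary keeps the regression problem small even when the number of snapshots is large, which is the practical point of such sparse-dictionary kernel methods.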
Over the last decades there has been a continuing international endeavor to develop realistic space weather prediction tools aiming to forecast the conditions on the Sun and in the interplanetary environment. These efforts have led to the need for appropriate metrics to assess the performance of those tools. Metrics are necessary for validating models, comparing different models, and monitoring adjustments or improvements of a given model over time. In this work, we introduce Dynamic Time Warping (DTW) as an alternative way to validate models and, in particular, to quantify differences between observed and synthetic (modeled) time series for space weather purposes. We present the advantages and drawbacks of this method as well as applications to WIND observations and EUHFORIA modeled output at L1. We show that DTW is a useful tool that permits the evaluation of both the fast and slow solar wind. Its distinctive characteristic is that it warps sequences in time, aiming to align them with minimum cost by using dynamic programming. It can be applied in two different ways for the evaluation of modeled solar wind time series. The first way calculates the so-called sequence similarity factor (SSF), a number that quantifies how good the forecast is compared to best- and worst-case prediction scenarios. The second way quantifies the time and amplitude differences between the points that are best matched between the two sequences. As a result, it can serve as a hybrid metric between continuous measures (such as, e.g., the correlation coefficient) and point-by-point comparisons. We conclude that DTW is a promising technique for the assessment of solar wind profiles, offering functions that other metrics do not, so that it can give at once the most complete evaluation profile of a model.
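The dynamic-programming alignment underlying DTW is standard and can be sketched compactly (a minimal textbook implementation with absolute-difference cost; the SSF and the specific cost choices used for solar wind profiles are not reproduced here):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between 1-D sequences a and b.

    D[i, j] is the minimum cumulative cost of aligning a[:i] with b[:j];
    each step extends the alignment by a match, an insertion, or a
    deletion, so sequences are warped in time to align at minimum cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # deletion
                                 D[i, j - 1],      # insertion
                                 D[i - 1, j - 1])  # match
    return D[n, m]
```

Because the warping path may match one point to several points of the other sequence, a time-shifted but otherwise identical profile incurs zero cost, which is exactly the property that makes DTW attractive for comparing modeled and observed solar wind arrival times.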