Scaling regions -- intervals on a graph where the dependent variable depends linearly on the independent variable -- abound in dynamical systems, notably in calculations of invariants like the correlation dimension or a Lyapunov exponent. In these applications, scaling regions are generally selected by hand, a process that is subjective and often challenging due to noise, algorithmic effects, and confirmation bias. In this paper, we propose an automated technique for extracting and characterizing such regions. Starting with a two-dimensional plot -- e.g., the values of the correlation integral, calculated using the Grassberger-Procaccia algorithm over a range of scales -- we create an ensemble of intervals by considering all possible combinations of endpoints, generating a distribution of slopes from least-squares fits weighted by the length of the fitting line and the inverse square of the fit error. The mode of this distribution gives an estimate of the slope of the scaling region (if it exists). The endpoints of the intervals that correspond to the mode provide an estimate for the extent of that region. When there is no scaling region, the distributions will be wide and the resulting error estimates for the slope will be large. We demonstrate this method for computations of dimension and Lyapunov exponent for several dynamical systems, and show that it can be useful in selecting values for the parameters in time-delay reconstructions.
Shadowing trajectories are model trajectories consistent with a sequence of observations of a system, given a distribution of observational noise. The existence of such trajectories is a desirable property of any forecast model. Gradient descent of indeterminism is a well-established technique for finding shadowing trajectories in low-dimensional analytical systems. Here we apply it to the thermally-driven rotating annulus, a laboratory experiment intermediate in model complexity and physical idealisation between analytical systems and global, comprehensive atmospheric models. We work in the perfect model scenario, using the MORALS model to generate a sequence of noisy observations in a chaotic flow regime. We demonstrate that the gradient descent technique recovers a pseudo-orbit of model states significantly closer to a model trajectory than the initial sequence. In the absence of a full adjoint model, we use gradient-free descent, in which the adjoint is set to $\lambda I$. The indeterminism of the pseudo-orbit falls by two orders of magnitude during the descent, but we find that the distance between the pseudo-orbit and the initial, true, model trajectory reaches a minimum and then increases, with the pseudo-orbit diverging from truth. We attribute this to the use of the $\lambda$-adjoint, which is well suited to noise reduction but not to finely-tuned convergence towards a model trajectory. We find that $\lambda=0.25$ gives optimal results, and that candidate model trajectories begun from this pseudo-orbit shadow the observations for up to 80 s, about the length of the longest timescale of the system, and similar to the shadowing times expected from the distance between the pseudo-orbit and the truth. There is great potential for using this method with real laboratory data.
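The descent of indeterminism with a $\lambda$-adjoint can be sketched on a toy map. Here the logistic map stands in for the forecast model (the abstract's experiments use MORALS), and the step size, step count, and $\lambda$ are illustrative choices:

```python
import numpy as np

def logistic(x, r=3.8):
    # Toy chaotic stand-in for the forecast model (the paper uses MORALS).
    return r * x * (1.0 - x)

def descend_indeterminism(u, model=logistic, lam=0.25, eta=0.02, steps=100):
    """Gradient descent of indeterminism with the lambda-adjoint.

    The mismatches e_i = u_{i+1} - F(u_i) measure how far the
    pseudo-orbit u is from being a model trajectory.  The full
    gradient of sum_i |e_i|^2 requires the adjoint DF^T, which is
    replaced here by lam * I (the gradient-free descent of the
    abstract).
    """
    u = u.copy()
    for _ in range(steps):
        e = u[1:] - model(u[:-1])   # indeterminism mismatches
        g = np.zeros_like(u)
        g[:-1] -= lam * e           # lambda-adjoint term (DF^T ~ lam * I)
        g[1:] += e                  # direct term d|e_i|^2 / du_{i+1}
        u -= eta * g
    e = u[1:] - model(u[:-1])
    return u, float(np.sum(e ** 2))
```

Consistent with the abstract, running the descent too long with the $\lambda$-adjoint can pull the pseudo-orbit away from the true trajectory even as the indeterminism keeps falling, so the step count here is kept modest.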
An intuitively necessary requirement of models used to provide forecasts of a system's future is the existence of shadowing trajectories that are consistent with past observations of the system: given a system-model pair, do model trajectories exist that stay reasonably close to a sequence of observations of the system? Techniques for finding such trajectories are well-understood in low-dimensional systems, but there is significant interest in their application to high-dimensional weather and climate models. We build on work by Smith et al. [2010, Phys. Lett. A, 374, 2618-2623] and develop a method for measuring the time that individual candidate trajectories of high-dimensional models shadow observations, using a model of the thermally-driven rotating annulus in the perfect model scenario. Models of the annulus are intermediate in complexity between low-dimensional systems and global atmospheric models. We demonstrate our method by measuring shadowing times against artificially-generated observations for candidate trajectories beginning a fixed distance from truth in one of the annulus's chaotic flow regimes. The distribution of candidate shadowing times calculated using our method corresponds closely to (1) the range of times over which the trajectories visually diverge from the observations and (2) the divergence time using a simple metric based on the distance between model trajectory and observations. An empirical relationship between the expected candidate shadowing times and the initial distance from truth confirms that the method behaves reasonably as parameters are varied.
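A simple distance-based divergence-time metric of the kind mentioned in point (2) can be written in a few lines. The threshold of 2.5 noise standard deviations is an illustrative assumption, not the paper's calibrated value:

```python
import numpy as np

def shadowing_time(candidate, observations, noise_std, tol=2.5, dt=1.0):
    """Divergence time of a candidate trajectory against observations.

    Both arrays have shape (n_times, n_state).  The candidate is
    deemed to shadow the observations until its distance from them
    first exceeds tol times the observational noise level; if it
    never does, the full record length is returned.
    """
    dist = np.linalg.norm(candidate - observations, axis=-1)
    exceeded = np.nonzero(dist > tol * noise_std)[0]
    return len(dist) * dt if exceeded.size == 0 else float(exceeded[0]) * dt
```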
The problem of reconstructing nonlinear and complex dynamical systems from measured data or time series is central to many scientific disciplines including physical, biological, computer, and social sciences, as well as engineering and economics. In this paper, we review the recent advances in this rapidly evolving field, aiming to cover topics such as compressive sensing (a novel optimization paradigm for sparse-signal reconstruction), noise-induced dynamical mapping, perturbations, reverse engineering, synchronization, inner composition alignment, global silencing, Granger causality, and alternative optimization algorithms. Often, these rely on various concepts from statistical and nonlinear physics such as phase transitions, bifurcations, stability, and robustness. The methodologies have the potential to significantly improve our ability to understand a variety of complex dynamical systems ranging from gene regulatory systems to social networks, towards the ultimate goal of controlling such systems. Despite recent progress, many challenges remain. A purpose of this Review is then to point out the specific difficulties as they arise from different contexts, so as to stimulate further efforts in this interdisciplinary field.
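One of the surveyed reconstruction methods, Granger causality, admits a compact least-squares illustration: the past of one series Granger-causes another if including it reduces the prediction error. This is a minimal sketch with an assumed lag order, not the specific implementation discussed in the review:

```python
import numpy as np

def lagged(z, lag):
    """Matrix whose columns are z at lags 1..lag, aligned with z[lag:]."""
    n = len(z)
    return np.column_stack([z[lag - k:n - k] for k in range(1, lag + 1)])

def granger_gain(x, y, lag=2):
    """Granger-causality score: does the past of x help predict y?

    Fits two least-squares autoregressions for y -- one on y's own
    lags, one on the lags of both y and x -- and returns the relative
    drop in residual sum of squares.  A positive gain suggests that
    x Granger-causes y.
    """
    Y = y[lag:]
    own = lagged(y, lag)
    full = np.column_stack([own, lagged(x, lag)])

    def rss(A):
        w = np.linalg.lstsq(A, Y, rcond=None)[0]
        r = Y - A @ w
        return float(r @ r)

    r_own, r_full = rss(own), rss(full)
    return (r_own - r_full) / r_own
```

Because the full regression nests the restricted one, the gain is always non-negative; in practice it would be compared against a significance threshold.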
Deep learning has emerged as a technique of choice for rapid feature extraction across imaging disciplines, allowing rapid conversion of data streams to spatial or spatiotemporal arrays of features of interest. However, applications of deep learning in experimental domains are often limited by out-of-distribution drift between experiments, where a network trained for one set of imaging conditions becomes sub-optimal for different ones. This limitation is particularly stringent in the quest for an automated experiment setting, where retraining or transfer learning becomes impractical due to the need for human intervention and the associated latencies. Here we explore the reproducibility of deep learning for feature extraction in atom-resolved electron microscopy and introduce workflows based on ensemble learning and iterative training to greatly improve feature detection. This approach both allows incorporating uncertainty quantification into the deep learning analysis and enables rapid automated experimental workflows, in which retraining of the network to compensate for out-of-distribution drift due to subtle changes in imaging conditions is replaced by human-operator or programmatic selection of networks from the ensemble. This methodology can be further applied to machine learning workflows in other imaging areas including optical and chemical imaging.
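The ensemble strategy can be sketched schematically: the spread across ensemble members supplies uncertainty, and the member closest to the consensus can be selected programmatically instead of retraining. The callables below are stand-ins for the trained networks, and the consensus-based selection rule is an illustrative assumption, not the paper's exact workflow:

```python
import numpy as np

def ensemble_predict(models, image):
    """Ensemble-averaged feature map with pixelwise uncertainty.

    `models` is a list of callables mapping an image to a feature
    probability map.  The pixelwise standard deviation across the
    ensemble provides the uncertainty quantification.
    """
    stack = np.stack([m(image) for m in models])
    return stack.mean(axis=0), stack.std(axis=0)

def select_member(models, image):
    """Programmatically select the ensemble member closest to the
    ensemble consensus, substituting for retraining when imaging
    conditions drift."""
    mean, _ = ensemble_predict(models, image)
    errs = [np.mean((m(image) - mean) ** 2) for m in models]
    return int(np.argmin(errs))
```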
An extremely challenging problem of significant interest is to predict catastrophes before they occur. We present a general approach to predicting catastrophes in nonlinear dynamical systems under the assumption that the system equations are completely unknown and only time series reflecting the evolution of the dynamical variables of the system are available. Our idea is to expand the vector field or map of the underlying system into a suitable function series and then to use the compressive-sensing technique to accurately estimate the various terms in the expansion. Examples using paradigmatic chaotic systems are provided to demonstrate our idea.
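The expansion-and-estimation idea can be illustrated on the logistic map, whose map function $x' = 3.8x - 3.8x^2$ is sparse in a polynomial basis. Sequential thresholded least squares is used below as a simple surrogate for the $\ell_1$ compressive-sensing solver of the abstract, with an assumed sparsity threshold:

```python
import numpy as np

def sparse_identify(X, Y, threshold=0.05, iters=10):
    """Estimate sparse expansion coefficients of an unknown map.

    X: (n_samples, n_basis) basis functions evaluated along the time
    series; Y: observed next-step values.  Coefficients below the
    threshold are iteratively pruned and the remainder refit.
    """
    w = np.linalg.lstsq(X, Y, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(w) < threshold
        w[small] = 0.0
        keep = ~small
        if keep.any():
            w[keep] = np.linalg.lstsq(X[:, keep], Y, rcond=None)[0]
    return w

# Recover the logistic map x' = 3.8 x - 3.8 x^2 from observed pairs.
rng = np.random.default_rng(1)
x = rng.uniform(0.1, 0.9, 200)   # sampled states
y = 3.8 * x * (1.0 - x)          # observed images under the map
basis = np.column_stack([np.ones_like(x), x, x ** 2, x ** 3])
w = sparse_identify(basis, y)    # expect approximately [0, 3.8, -3.8, 0]
```

With noise-free data the spurious basis terms (the constant and cubic) are pruned exactly, leaving the true linear and quadratic coefficients.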