
Predicting Critical Transitions in Multiscale Dynamical Systems Using Reservoir Computing

Added by Soon Hoe Lim
Publication date: 2019
Field: Physics
Language: English





We study the problem of predicting rare critical transition events for a class of slow-fast nonlinear dynamical systems. The state of the system of interest is described by a slow process, whereas a faster process drives its evolution and induces critical transitions. By taking advantage of recent advances in reservoir computing, we present a data-driven method to predict the future evolution of the state. We show that our method is capable of predicting a critical transition event at least several numerical time steps in advance. We demonstrate the success as well as the limitations of our method using numerical experiments on three example systems, ranging from low dimensional to high dimensional. We discuss the mathematical and broader implications of our results.
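As a rough illustration of the reservoir-computing approach described in the abstract (not the authors' specific architecture), the following NumPy sketch trains an echo state network to forecast a toy slow-fast signal in closed loop. The signal, reservoir size, spectral radius, and ridge parameter are all illustrative assumptions.

```python
# Minimal echo state network (reservoir computer) forecaster, a sketch only.
import numpy as np

rng = np.random.default_rng(0)

# Toy slow-fast signal: a slow oscillation plus a faster, smaller one.
t = np.linspace(0.0, 60.0, 3000)
u = np.sin(0.5 * t) + 0.2 * np.sin(7.0 * t)

N = 300                                   # reservoir size
W_in = rng.uniform(-0.5, 0.5, N)          # input weights
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9

def run_reservoir(inputs):
    """Drive the reservoir with a sequence of scalar inputs; return states."""
    x = np.zeros(N)
    states = np.empty((len(inputs), N))
    for i, s in enumerate(inputs):
        x = np.tanh(W @ x + W_in * s)
        states[i] = x
    return states

# Ridge-regression readout trained to predict the next value of the signal.
train = u[:2500]
X = run_reservoir(train[:-1])
washout = 100                             # discard the initial transient
Xw, Yw = X[washout:], train[1 + washout:]
W_out = np.linalg.solve(Xw.T @ Xw + 1e-6 * np.eye(N), Xw.T @ Yw)
train_rmse = float(np.sqrt(np.mean((Xw @ W_out - Yw) ** 2)))

# Closed-loop forecast: feed each prediction back in as the next input.
x, s = X[-1], train[-1]
preds = np.empty(100)
for i in range(100):
    x = np.tanh(W @ x + W_in * s)
    s = x @ W_out
    preds[i] = s
forecast_rmse = float(np.sqrt(np.mean((preds - u[2500:2600]) ** 2)))
```

Run in closed loop like this, the trained network produces a multi-step forecast, which is what allows a transition event to be flagged several numerical time steps before it occurs.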



Related research

Closed quantum systems exhibit different dynamical regimes, like Many-Body Localization or thermalization, which determine how information is spread and processed. Here we address the impact of these dynamical phases in quantum reservoir computing, an unconventional computing paradigm recently extended into the quantum regime that exploits dynamical systems to solve nonlinear and temporal tasks. We establish that the thermal phase is naturally adapted to the requirements of quantum reservoir computing and report increased performance at the thermalization transition for the studied tasks. Uncovering the underlying physical mechanisms behind the optimal information processing capabilities of spin networks is essential for future experimental implementations and provides a new perspective on dynamical phases.
Machine learning models have emerged as powerful tools in physics and engineering. Although flexible, a fundamental challenge remains on how to connect new machine learning models with known physics. In this work, we present an autoencoder with latent space penalization, which discovers finite dimensional manifolds underlying the partial differential equations of physics. We test this method on the Kuramoto-Sivashinsky (K-S), Korteweg-de Vries (KdV), and damped KdV equations. We show that the resulting optimal latent space of the K-S equation is consistent with the dimension of the inertial manifold. The results for the KdV equation imply that there is no reduced latent space, which is consistent with the truly infinite dimensional dynamics of the KdV equation. In the case of the damped KdV equation, we find that the number of active dimensions decreases with increasing damping coefficient. We then uncover a nonlinear basis representing the manifold of the latent space for the K-S equation.
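One common way to realize the latent space penalization described above is an L1 sparsity term on the latent code, with weight decay on the decoder to rule out trivial rescaling around the penalty. The sketch below applies this idea, with a linear autoencoder on synthetic data whose true manifold dimension is two; the penalty strengths, sizes, and training schedule are illustrative assumptions, not the paper's exact formulation.

```python
# Linear autoencoder with an L1 latent penalty; unused latent dimensions
# shrink, so counting "active" dimensions estimates the manifold dimension.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data lying on a 2-D linear manifold embedded in 10 dimensions.
n, d, k_true, m = 200, 10, 2, 5
S = rng.normal(size=(n, k_true))
M = rng.normal(size=(k_true, d))
X = S @ M

We = 0.1 * rng.normal(size=(d, m))   # encoder weights
Wd = 0.1 * rng.normal(size=(m, d))   # decoder weights
lam, wd, lr = 1e-2, 1e-2, 0.2        # L1 strength, weight decay, step size

def mse(A, B):
    return float(np.mean((A - B) ** 2))

init_err = mse(X @ We @ Wd, X)
for _ in range(4000):
    Z = X @ We                            # latent code
    Xhat = Z @ Wd                         # reconstruction
    G = 2.0 * (Xhat - X) / (n * d)        # d(mse)/d(Xhat)
    gWd = Z.T @ G + wd * Wd
    gZ = G @ Wd.T + lam * np.sign(Z) / (n * m)   # L1 subgradient
    gWe = X.T @ gZ
    We -= lr * gWe
    Wd -= lr * gWd

final_err = mse(X @ We @ Wd, X)
latent_std = (X @ We).std(axis=0)
active_dims = int(np.sum(latent_std > 0.1 * latent_std.max()))
```

After training, `active_dims` reports how many latent coordinates still carry signal, which is the spirit of the paper's comparison against the inertial-manifold dimension for the K-S equation.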
Reservoir computers (RC) are a form of recurrent neural network (RNN) used for forecasting time series data. As with all RNNs, selecting the hyperparameters presents a challenge when training on new inputs. We present a method based on generalized synchronization (GS) that gives direction in designing and evaluating the architecture and hyperparameters of an RC. The auxiliary method for detecting GS provides a computationally efficient pre-training test that guides hyperparameter selection. Furthermore, we provide a metric for RC using the reproduction of the input system's Lyapunov exponents that demonstrates robustness in prediction.
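The auxiliary test for generalized synchronization can be sketched as follows: drive two copies of the same reservoir with an identical input from different initial states and check whether their states converge before any training is done. The reservoir size, drive signal, spectral radii, and thresholds below are illustrative assumptions.

```python
# Auxiliary test for generalized synchronization: two identically driven
# reservoir copies should forget their initial conditions when GS holds.
import numpy as np

rng = np.random.default_rng(2)

N = 200
u = np.sin(0.1 * np.arange(2000))            # arbitrary drive signal
W_in = rng.uniform(-0.1, 0.1, N)

def state_gap(rho, steps=2000):
    """Final distance between two reservoir copies started apart but driven
    by the same input, for a reservoir with spectral radius rho."""
    W = rng.normal(size=(N, N))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))
    xa = rng.normal(size=N)
    xb = rng.normal(size=N)
    for s in u[:steps]:
        xa = np.tanh(W @ xa + W_in * s)
        xb = np.tanh(W @ xb + W_in * s)
    return float(np.linalg.norm(xa - xb))

gap_contracting = state_gap(0.8)   # small spectral radius: copies converge
gap_chaotic = state_gap(2.5)       # large spectral radius: typically they do not
```

A vanishing gap is the computationally cheap pre-training signal that a given architecture and hyperparameter choice is worth training; a persistent gap rules it out.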
Stochastic dynamical systems with continuous symmetries arise commonly in nature and often give rise to coherent spatio-temporal patterns. However, because of their random locations, these patterns are not well captured by current order reduction techniques and a large number of modes is typically necessary for an accurate solution. In this work, we introduce a new methodology for efficient order reduction of such systems by combining (i) the method of slices, a symmetry reduction tool, with (ii) any standard order reduction technique, resulting in efficient mixed symmetry-dimensionality reduction schemes. In particular, using the Dynamically Orthogonal (DO) equations in the second step, we obtain a novel nonlinear Symmetry-reduced Dynamically Orthogonal (SDO) scheme. We demonstrate the performance of the SDO scheme on stochastic solutions of the 1D Korteweg-de Vries and 2D Navier-Stokes equations.
Chensen Lin, Zhen Li, Lu Lu (2020)
Simulating and predicting multiscale problems that couple multiple physics and dynamics across many orders of spatiotemporal scales is a great challenge that has not been investigated systematically by deep neural networks (DNNs). Herein, we develop a framework based on operator regression, the so-called deep operator network (DeepONet), with the long term objective to simplify multiscale modeling by avoiding the fragile and time-consuming hand-shaking interface algorithms for stitching together heterogeneous descriptions of multiscale phenomena. To this end, as a first step, we investigate if a DeepONet can learn the dynamics of different scale regimes, one at the deterministic macroscale and the other at the stochastic microscale regime with inherent thermal fluctuations. Specifically, we test the effectiveness and accuracy of DeepONet in predicting multirate bubble growth dynamics, which is described by a Rayleigh-Plesset (R-P) equation at the macroscale and modeled as a stochastic nucleation and cavitation process at the microscale by dissipative particle dynamics (DPD). Taken together, our findings demonstrate that DeepONets can be employed to unify the macroscale and microscale models of the multirate bubble growth problem, hence providing new insight into the role of operator regression via DNNs in tackling realistic multiscale problems and in simplifying modeling with heterogeneous descriptions.
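The DeepONet structure mentioned above pairs a branch net, which encodes the input function sampled at fixed sensor points, with a trunk net, which encodes the query coordinate; their dot product approximates the operator output. The sketch below shows only this forward pass with random, untrained weights; every layer size and signal is an illustrative assumption, not the configuration used in the paper.

```python
# Forward-pass sketch of the DeepONet architecture: G(u)(t) = branch(u) . trunk(t).
import numpy as np

rng = np.random.default_rng(1)

def mlp(sizes):
    """Random tanh MLP parameters for the given layer sizes."""
    return [(rng.normal(size=(a, b)) / np.sqrt(a), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    for i, (Wt, bt) in enumerate(params):
        x = x @ Wt + bt
        if i < len(params) - 1:           # no nonlinearity on the last layer
            x = np.tanh(x)
    return x

m, p = 50, 20                 # number of sensor points, latent width
branch = mlp([m, 64, 64, p])  # encodes the sampled input function u(.)
trunk = mlp([1, 64, 64, p])   # encodes the query coordinate t

# One input function sampled at m fixed sensors, queried at 30 time points.
u_sensors = np.sin(np.linspace(0.0, 1.0, m))[None, :]   # shape (1, m)
t_query = np.linspace(0.0, 1.0, 30)[:, None]            # shape (30, 1)

b_out = forward(branch, u_sensors)   # (1, p)
t_out = forward(trunk, t_query)      # (30, p)
G = t_out @ b_out.T                  # (30, 1): approximation of G(u)(t)
```

Because the branch and trunk inputs are decoupled, the same trained network can be queried at macroscale or microscale resolutions, which is what makes the architecture a candidate for unifying heterogeneous scale regimes.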
