
A Deep Learning Functional Estimator of Optimal Dynamics for Sampling Large Deviations

Added by Tom Oakes
Publication date: 2020
Field: Physics
Language: English





In stochastic systems, numerically sampling the trajectories relevant for estimating the large deviation statistics of time-extensive observables requires overcoming their exponential (in space and time) scarcity. The optimal way to access these rare events is by means of an auxiliary dynamics obtained from the original one through the so-called "generalised Doob transformation". While this optimal dynamics is guaranteed to exist, its use is often impractical, as defining it requires the typically intractable task of diagonalising a (tilted) dynamical generator. Approximate schemes have been devised to overcome this issue, but they are difficult to automate as they tend to require system-specific knowledge. Here we address this problem from the perspective of deep learning. We devise an iterative semi-supervised learning scheme which converges to the optimal or Doob dynamics, with the clear advantage of requiring no prior knowledge of the system. We test our method on a paradigmatic statistical mechanics model with non-trivial dynamical fluctuations, the fully packed classical dimer model on the square lattice, showing that it compares favourably with more traditional approaches. We discuss broader implications of our results for the study of rare dynamical trajectories.
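For context, the generalised Doob transformation referred to above has a standard closed form (the notation here is generic, not taken from the paper): given the tilted generator $\mathbb{W}_s$ with largest eigenvalue $\theta(s)$ and corresponding left eigenvector $l_s$, the optimal dynamics is generated by

```latex
W^{\mathrm{Doob}}_s \;=\; \hat{l}_s\, \mathbb{W}_s\, \hat{l}_s^{-1} \;-\; \theta(s)\,\mathbb{1},
\qquad \hat{l}_s = \mathrm{diag}(l_s),
```

which is a proper stochastic generator. The practical difficulty the abstract points to is precisely that obtaining $l_s$ requires diagonalising $\mathbb{W}_s$, which is what the learning scheme is designed to avoid.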



Related research

The large deviation (LD) statistics of dynamical observables is encoded in the spectral properties of deformed Markov generators. Recent works have shown that tensor network methods are well suited to compute the relevant leading eigenvalues and eigenvectors accurately. However, the efficient generation of the corresponding rare trajectories is a harder task. Here we show how to exploit the MPS approximation of the dominant eigenvector to implement an efficient sampling scheme which closely resembles the optimal (so-called Doob) dynamics that realises the rare events. We demonstrate our approach on three well-studied lattice models, the Fredrickson-Andersen and East kinetically constrained models (KCMs), and the symmetric simple exclusion process (SSEP). We discuss how to generalise our approach to higher dimensions.
Simple models of irreversible dynamical processes such as Bootstrap Percolation have been successfully applied to describe cascade processes in a large variety of different contexts. However, the problem of analyzing non-typical trajectories, which can be crucial for understanding out-of-equilibrium phenomena, is still considered intractable in most cases. Here we introduce an efficient method to find and analyze optimized trajectories of cascade processes. We show that for a wide class of irreversible dynamical rules, this problem can be solved efficiently on large-scale systems.
The one-point distribution of the height for the continuum Kardar-Parisi-Zhang (KPZ) equation is determined numerically using the mapping to the directed polymer in a random potential at high temperature. Using an importance sampling approach, the distribution is obtained over a large range of values, down to a probability density as small as $10^{-1000}$ in the tails. The short time behavior is investigated and compared with recent analytical predictions for the large-deviation forms of the probability of rare fluctuations, showing a spectacular agreement with the analytical expressions. The flat and stationary initial conditions are studied in the full space, together with the droplet initial condition in the half-space.
We use a neural network ansatz originally designed for the variational optimization of quantum systems to study dynamical large deviations in classical ones. We obtain the scaled cumulant-generating function for the dynamical activity of the Fredrickson-Andersen model, a prototypical kinetically constrained model, in one and two dimensions, and present the first size-scaling analysis of the dynamical activity in two dimensions. These results provide a new route to the study of dynamical large-deviation functions, and highlight the broad applicability of the neural-network state ansatz across domains in physics.
Very often when studying non-equilibrium systems one is interested in analysing dynamical behaviour that occurs with very low probability, so-called rare events. In practice, since rare events are by definition atypical, they are often difficult to access in a statistically significant way. What are required are strategies to make rare events typical so that they can be generated on demand. Here we present such a general approach to adaptively construct a dynamics that efficiently samples atypical events. We do so by exploiting the methods of reinforcement learning (RL), which refers to the set of machine learning techniques aimed at finding the optimal behaviour to maximise a reward associated with the dynamics. We consider the general perspective of dynamical trajectory ensembles, whereby rare events are described in terms of ensemble reweighting. By minimising the distance between a reweighted ensemble and that of a suitably parametrised controlled dynamics we arrive at a set of methods similar to those of RL to numerically approximate the optimal dynamics that realises the rare behaviour of interest. As simple illustrations we consider in detail the problem of excursions of a random walker, for the case of rare events with a finite time horizon; and the problem of studying the current statistics of a particle hopping in a ring geometry, for the case of an infinite time horizon. We discuss natural extensions of the ideas presented here, including to continuous-time Markov systems, first passage time problems and non-Markovian dynamics.
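As a minimal illustration of the ensemble-reweighting idea described above (a toy sketch under our own assumptions, not the authors' RL scheme; all function names and parameter values here are illustrative), the following estimates a rare-event probability for a random walker with a finite time horizon by sampling from an exponentially tilted dynamics and reweighting trajectories back to the unbiased ensemble:

```python
import math
import random

def sample_tilted_walk(n_steps, s, rng):
    """Sample a +/-1 random walk whose steps are drawn from the
    exponentially tilted probabilities p_s(+1) = e^s / (e^s + e^-s).
    Returns the final position and the importance weight mapping
    the tilted ensemble back to the unbiased (p = 1/2) one."""
    p_up = math.exp(s) / (math.exp(s) + math.exp(-s))
    x, log_w = 0, 0.0
    for _ in range(n_steps):
        if rng.random() < p_up:
            x += 1
            log_w += math.log(0.5 / p_up)       # weight for an up step
        else:
            x -= 1
            log_w += math.log(0.5 / (1.0 - p_up))  # weight for a down step
    return x, math.exp(log_w)

def rare_event_probability(n_steps, threshold, s, n_samples=20000, seed=0):
    """Importance-sampling estimate of P(X_T >= threshold) for the
    unbiased walk, using trajectories of the tilted dynamics."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_samples):
        x, w = sample_tilted_walk(n_steps, s, rng)
        if x >= threshold:
            acc += w
    return acc / n_samples
```

With the tilt switched off (s = 0) this reduces to direct sampling, which would essentially never observe, say, X_100 >= 60; choosing s so that the tilted walk concentrates near the threshold makes the rare event typical while the weights keep the estimator unbiased. In the RL-style methods described above, the fixed tilt is replaced by a parametrised controlled dynamics optimised to match the reweighted ensemble.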
