
Anomalous scaling of dynamical large deviations of stationary Gaussian processes

Added by Baruch Meerson
Publication date: 2019
Fields: Physics
Language: English





Employing the optimal fluctuation method (OFM), we study the large deviation function of long-time averages $(1/T)\int_{-T/2}^{T/2} x^n(t)\,dt$, $n=1,2,\dots$, of centered stationary Gaussian processes. These processes are correlated and, in general, non-Markovian. We show that the anomalous scaling with time of the large-deviation function, recently observed for $n>2$ for the particular case of the Ornstein-Uhlenbeck process, holds for a whole class of stationary Gaussian processes.
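The paper itself is analytical; purely as an illustration of the observable in question (my own sketch, not from the paper), the following Python snippet samples the time average $(1/T)\int_{-T/2}^{T/2} x^n(t)\,dt$ for the Ornstein-Uhlenbeck process, the simplest centered stationary Gaussian process mentioned above, using an Euler-Maruyama discretization. Direct sampling of this kind only resolves typical fluctuations; the large-deviation tails treated by the OFM require asymptotic analysis or importance sampling.

```python
# Minimal sketch (not from the paper): sample the long-time average
# A_T = (1/T) * integral of x(t)^n dt over a window of length T for an
# Ornstein-Uhlenbeck process dx = -x dt + sqrt(2) dW, the simplest
# centered stationary Gaussian process.
import numpy as np

def ou_time_average(n=4, T=50.0, dt=0.01, rng=None):
    """Return one sample of (1/T) * sum x(t)^n * dt along an OU trajectory."""
    rng = np.random.default_rng() if rng is None else rng
    steps = int(T / dt)
    x = rng.normal(0.0, 1.0)          # start in the stationary state (unit variance)
    acc = 0.0
    for _ in range(steps):
        acc += x**n * dt
        x += -x * dt + np.sqrt(2.0 * dt) * rng.normal()   # Euler-Maruyama step
    return acc / T

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    samples = np.array([ou_time_average(n=4, T=50.0, rng=rng) for _ in range(500)])
    print("mean of A_T:", samples.mean())     # sample mean should be close to E[x^4] = 3
    print("std  of A_T:", samples.std())
```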



Related research

The typical values and fluctuations of time-integrated observables of nonequilibrium processes driven in steady states are known to be characterized by large deviation functions, generalizing the entropy and free energy to nonequilibrium systems. The definition of these functions involves a scaling limit, similar to the thermodynamic limit, in which the integration time $\tau$ appears linearly, unless the process considered has long-range correlations, in which case $\tau$ is generally replaced by $\tau^{\xi}$ with $\xi \neq 1$. Here we show that such an anomalous power-law scaling in time of large deviations can also arise without long-range correlations in Markovian processes as simple as the Langevin equation. We describe the mechanism underlying this scaling using path integrals and discuss its physical consequences for more general processes.
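For concreteness, the contrast described in this abstract can be written out explicitly; the symbols $A_\tau$ and $I(a)$ are generic notation added here, not taken from the paper:

$P(A_\tau \approx a) \asymp e^{-\tau\, I(a)}$ (standard scaling), versus $P(A_\tau \approx a) \asymp e^{-\tau^{\xi} I(a)}$ with $\xi \neq 1$ (anomalous scaling),

where $A_\tau$ is the time-integrated (or time-averaged) observable and $I(a)$ its large deviation function.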
Simple models of irreversible dynamical processes such as Bootstrap Percolation have been successfully applied to describe cascade processes in a large variety of different contexts. However, the problem of analyzing non-typical trajectories, which can be crucial for understanding out-of-equilibrium phenomena, is still considered to be intractable in most cases. Here we introduce an efficient method to find and analyze optimized trajectories of cascade processes. We show that for a wide class of irreversible dynamical rules, this problem can be solved efficiently on large-scale systems.
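As a point of reference for the dynamics being discussed (this is plain bootstrap percolation with parameters chosen by me, not the trajectory-optimization method of the paper), a minimal Python sketch of the irreversible rule is:

```python
# Minimal sketch (not the paper's method): irreversible bootstrap percolation
# on an L x L square lattice. A site becomes active, and stays active, once at
# least m of its nearest neighbors are active.
import numpy as np

def bootstrap_percolation(L=50, p=0.1, m=2, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    active = rng.random((L, L)) < p            # random initial seed with density p
    while True:
        nn = np.zeros((L, L), dtype=int)       # count active nearest neighbors (open boundaries)
        nn[1:, :] += active[:-1, :]
        nn[:-1, :] += active[1:, :]
        nn[:, 1:] += active[:, :-1]
        nn[:, :-1] += active[:, 1:]
        new = active | (nn >= m)               # irreversible update rule
        if np.array_equal(new, active):
            return new                         # fixed point reached
        active = new

if __name__ == "__main__":
    final = bootstrap_percolation(L=100, p=0.06, m=2, rng=np.random.default_rng(1))
    print("final active fraction:", final.mean())
```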
We show how to calculate the likelihood of dynamical large deviations using evolutionary reinforcement learning. An agent, a stochastic model, propagates a continuous-time Monte Carlo trajectory and receives a reward conditioned upon the values of certain path-extensive quantities. Evolution produces progressively fitter agents, eventually allowing the calculation of a piece of a large-deviation rate function for a particular model and path-extensive quantity. For models with small state spaces the evolutionary process acts directly on rates, and for models with large state spaces the process acts on the weights of a neural network that parameterizes the model's rates. This approach shows how path-extensive physics problems can be considered within a framework widely used in machine learning.
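The abstract describes the scheme only at a high level. The toy sketch below is a heavy simplification of that idea, with every parameter chosen by me: agents are the two jump rates of a two-state continuous-time model, a Gillespie trajectory supplies the path-extensive quantity (the activity, i.e. jumps per unit time), and mutation plus selection pushes the population toward rates whose trajectories realise an atypical target activity. The actual method additionally extracts a piece of the large-deviation rate function, which this toy omits.

```python
# Toy sketch (my simplification, not the authors' algorithm): evolve the jump
# rates of a two-state continuous-time Markov model so that its trajectories
# realise an atypical time-averaged activity k (jumps per unit time).
import numpy as np

def activity(rates, T=200.0, rng=None):
    """Gillespie trajectory of a two-state model; return jumps per unit time."""
    rng = np.random.default_rng() if rng is None else rng
    state, t, jumps = 0, 0.0, 0
    while True:
        dt = rng.exponential(1.0 / rates[state])   # waiting time in current state
        if t + dt > T:
            break
        t, state, jumps = t + dt, 1 - state, jumps + 1
    return jumps / T

def evolve(k_target=5.0, generations=40, pop=20, T=200.0, seed=0):
    rng = np.random.default_rng(seed)
    agents = [np.array([1.0, 1.0]) for _ in range(pop)]            # initial rates
    for _ in range(generations):
        # reward: trajectories whose activity is close to the atypical target
        scores = [-abs(activity(a, T, rng) - k_target) for a in agents]
        best = [agents[i] for i in np.argsort(scores)[-pop // 2:]]
        # mutate the fitter half (log-normal perturbation of the rates)
        agents = best + [a * np.exp(0.2 * rng.normal(size=2)) for a in best]
    return max(agents, key=lambda a: -abs(activity(a, T, rng) - k_target))

if __name__ == "__main__":
    best = evolve()
    print("evolved rates:", best, "activity:", activity(best, T=1000.0))
```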
The persistence of a stochastic variable is the probability that it does not cross a given level during a fixed time interval. Although persistence is a simple concept to understand, it is in general hard to calculate. Here we consider zero mean Gaussian stationary processes in discrete time $n$. Few results are known for the persistence $P_0(n)$ in discrete time, except the large time behavior which is characterized by the nontrivial constant $\theta$ through $P_0(n)\sim \theta^{n}$. Using a modified version of the Independent Interval Approximation (IIA) that we developed before, we are able to calculate $P_0(n)$ analytically in $z$-transform space in terms of the autocorrelation function $A(n)$. If $A(n)\to 0$ as $n\to\infty$, we extract $\theta$ numerically, while if $A(n)=0$ for finite $n>N$, we find $\theta$ exactly (within the IIA). We apply our results to three special cases: the nearest-neighbor-correlated first-order moving average process where $A(n)=0$ for $n>1$, the double-exponential-correlated second-order autoregressive process where $A(n)=c_1\lambda_1^{n}+c_2\lambda_2^{n}$, and power-law-correlated variables where $A(n)\sim n^{-\mu}$. Apart from the power-law case when $\mu<5$, we find excellent agreement with simulations.
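The IIA calculation is analytical, but the quantities defined above are easy to check by direct Monte Carlo. The sketch below (my own, with arbitrary parameters) samples the first-order moving-average process mentioned in the abstract, for which $A(1)=1/2$ and $A(n)=0$ for $n>1$, and estimates the stay-positive probability; by symmetry the no-sign-change probability is twice this and decays with the same $\theta$.

```python
# Monte Carlo sketch (not the IIA calculation): estimate the probability that a
# zero-mean Gaussian stationary sequence stays positive up to step n, for the
# MA(1) process x_k = (e_k + e_{k+1}) / sqrt(2), whose autocorrelation vanishes
# beyond nearest neighbors.
import numpy as np

def stay_positive_prob(n_max=15, trials=500_000, seed=0):
    rng = np.random.default_rng(seed)
    e = rng.normal(size=(trials, n_max + 2))
    x = (e[:, :-1] + e[:, 1:]) / np.sqrt(2.0)      # MA(1) sequence, unit variance
    positive = x > 0.0
    p0 = np.empty(n_max)
    for n in range(1, n_max + 1):
        p0[n - 1] = positive[:, :n].all(axis=1).mean()
    return p0

if __name__ == "__main__":
    p0 = stay_positive_prob()
    # For large n the probability decays like theta^n, so successive ratios
    # give a rough estimate of theta.
    print("theta estimate:", p0[-1] / p0[-2])
```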
The large deviation (LD) statistics of dynamical observables is encoded in the spectral properties of deformed Markov generators. Recent works have shown that tensor network methods are well suited to compute the relevant leading eigenvalues and eigenvectors accurately. However, the efficient generation of the corresponding rare trajectories is a harder task. Here we show how to exploit the MPS approximation of the dominant eigenvector to implement an efficient sampling scheme which closely resembles the optimal (so-called Doob) dynamics that realises the rare events. We demonstrate our approach on three well-studied lattice models, the Fredrickson-Andersen and East kinetically constrained models (KCMs), and the symmetric simple exclusion process (SSEP). We discuss how to generalise our approach to higher dimensions.
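The tensor-network and Doob-dynamics machinery is beyond a short snippet, but the opening statement, that LD statistics are encoded in the spectral properties of deformed (tilted) Markov generators, can be illustrated by brute force for a very small Fredrickson-Andersen chain. The sketch below (my own toy with arbitrary parameters and boundary choices, not the paper's MPS method) builds the activity-tilted generator and reads the scaled cumulant generating function off its largest eigenvalue; for larger systems this dense approach becomes infeasible, which is exactly where MPS methods come in.

```python
# Brute-force sketch (my own toy, not the paper's tensor-network method):
# build the activity-tilted generator of a small Fredrickson-Andersen chain and
# obtain the scaled cumulant generating function from its largest eigenvalue.
import itertools
import numpy as np

def fa_tilted_generator(L=8, c=0.3, s=0.0):
    """Tilted generator W_s for the FA model with periodic boundaries.
    A spin may flip only if at least one nearest neighbor is up; flips up occur
    with rate c, flips down with rate 1 - c; each off-diagonal (flip) element is
    weighted by exp(-s), i.e. the generator is tilted by the activity."""
    configs = list(itertools.product((0, 1), repeat=L))
    index = {cfg: k for k, cfg in enumerate(configs)}
    W = np.zeros((len(configs), len(configs)))
    for cfg in configs:
        for i in range(L):
            if cfg[(i - 1) % L] == 1 or cfg[(i + 1) % L] == 1:   # kinetic constraint
                rate = c if cfg[i] == 0 else 1.0 - c
                new = list(cfg)
                new[i] = 1 - new[i]
                W[index[tuple(new)], index[cfg]] += np.exp(-s) * rate
                W[index[cfg], index[cfg]] -= rate                 # escape rate
    return W

if __name__ == "__main__":
    # theta(0) should vanish (up to numerical error), since W_0 is a proper generator.
    for s in (-0.5, 0.0, 0.5):
        theta = np.linalg.eigvals(fa_tilted_generator(L=8, c=0.3, s=s)).real.max()
        print(f"s = {s:+.1f}  SCGF theta(s) = {theta:.4f}")
```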