
Anomalous scaling of dynamical large deviations

Added by Hugo Touchette
Publication date: 2018
Fields: Physics
Language: English





The typical values and fluctuations of time-integrated observables of nonequilibrium processes driven in steady states are known to be characterized by large deviation functions, generalizing the entropy and free energy to nonequilibrium systems. The definition of these functions involves a scaling limit, similar to the thermodynamic limit, in which the integration time $\tau$ appears linearly, unless the process considered has long-range correlations, in which case $\tau$ is generally replaced by $\tau^\xi$ with $\xi \neq 1$. Here we show that such an anomalous power-law scaling in time of large deviations can also arise without long-range correlations in Markovian processes as simple as the Langevin equation. We describe the mechanism underlying this scaling using path integrals and discuss its physical consequences for more general processes.
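The scaling of fluctuations of a time-integrated observable can be probed directly by simulation. The sketch below, a minimal illustration rather than the authors' setup, samples $A_\tau = (1/\tau)\int_0^\tau x(t)\,dt$ for an Ornstein-Uhlenbeck-type Langevin equation via Euler-Maruyama integration; all parameter values are assumed for illustration.

```python
import numpy as np

def time_averaged_position(tau, dt=0.01, gamma=1.0, D=0.5, rng=None):
    """Integrate the overdamped Langevin equation
        dx = -gamma * x dt + sqrt(2D) dW
    with the Euler-Maruyama scheme and return the time-integrated
    observable A_tau = (1/tau) * int_0^tau x(t) dt."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n_steps = int(tau / dt)
    x, acc = 0.0, 0.0
    noise_amp = np.sqrt(2.0 * D * dt)
    for _ in range(n_steps):
        acc += x * dt
        x += -gamma * x * dt + noise_amp * rng.standard_normal()
    return acc / tau

rng = np.random.default_rng(42)
samples = np.array([time_averaged_position(50.0, rng=rng) for _ in range(200)])
print(samples.mean(), samples.var())
```

Repeating this for several values of $\tau$ and examining how $-\log P(A_\tau \approx a)$ grows with $\tau$ is the numerical counterpart of the scaling limit discussed in the abstract.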





Baruch Meerson (2019)
Employing the optimal fluctuation method (OFM), we study the large deviation function of long-time averages $(1/T)\int_{-T/2}^{T/2} x^n(t)\,dt$, $n=1,2,\dots$, of centered stationary Gaussian processes. These processes are correlated and, in general, non-Markovian. We show that the anomalous scaling with time of the large-deviation function, recently observed for $n>2$ for the particular case of the Ornstein-Uhlenbeck process, holds for a whole class of stationary Gaussian processes.
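The observable studied above is easy to estimate numerically for the Ornstein-Uhlenbeck case. The following sketch (illustrative parameters, not from the paper) computes $A_n = (1/T)\int_0^T x^n(t)\,dt$ for $n=1,2$; since the stationary distribution is Gaussian with variance $D/\gamma$, the typical values converge to $0$ and $D/\gamma$ respectively.

```python
import numpy as np

def time_average_power(n, T=200.0, dt=0.01, gamma=1.0, D=1.0, rng=None):
    """Estimate A_n = (1/T) int_0^T x^n(t) dt along one Euler-Maruyama
    path of the Ornstein-Uhlenbeck process dx = -gamma*x dt + sqrt(2D) dW.
    For large T, A_1 -> 0 and A_2 -> D/gamma (stationary variance)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    steps = int(T / dt)
    x, acc = 0.0, 0.0
    noise_amp = np.sqrt(2.0 * D * dt)
    for _ in range(steps):
        acc += x**n * dt
        x += -gamma * x * dt + noise_amp * rng.standard_normal()
    return acc / T

rng = np.random.default_rng(1)
a1 = time_average_power(1, rng=rng)
a2 = time_average_power(2, rng=rng)
print(a1, a2)
```

The anomalous scaling found in the paper concerns the far tails of the distribution of such $A_n$ for $n>2$, which direct sampling like this cannot reach; that is precisely where the optimal fluctuation method is needed.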
We show how to calculate the likelihood of dynamical large deviations using evolutionary reinforcement learning. An agent, a stochastic model, propagates a continuous-time Monte Carlo trajectory and receives a reward conditioned upon the values of certain path-extensive quantities. Evolution produces progressively fitter agents, eventually allowing the calculation of a piece of a large-deviation rate function for a particular model and path-extensive quantity. For models with small state spaces the evolutionary process acts directly on rates, and for models with large state spaces the process acts on the weights of a neural network that parameterizes the model's rates. This approach shows how path-extensive physics problems can be considered within a framework widely used in machine learning.
The large deviation (LD) statistics of dynamical observables is encoded in the spectral properties of deformed Markov generators. Recent works have shown that tensor network methods are well suited to compute the relevant leading eigenvalues and eigenvectors accurately. However, the efficient generation of the corresponding rare trajectories is a harder task. Here we show how to exploit the MPS approximation of the dominant eigenvector to implement an efficient sampling scheme which closely resembles the optimal (so-called Doob) dynamics that realises the rare events. We demonstrate our approach on three well-studied lattice models, the Fredrickson-Andersen and East kinetically constrained models (KCMs), and the symmetric simple exclusion process (SSEP). We discuss how to generalise our approach to higher dimensions.
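Stripped of the tensor-network machinery, the underlying construction is linear algebra on a deformed generator. The sketch below (a small assumed example, not the lattice models of the paper) tilts the generator of a three-state jump process so as to count jumps with bias $s$, reads off the scaled cumulant generating function as the dominant eigenvalue, and builds the Doob rates $\,l(y)/l(x)\,e^{-s} w(x\to y)$ from the dominant left eigenvector.

```python
import numpy as np

# Three-state ring with unit jump rates; w[y, x] is the rate x -> y.
w = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
s = 0.5  # counting field for the activity (number of jumps)

# Tilted (deformed) generator: off-diagonal rates weighted by e^{-s},
# escape rates on the diagonal unchanged.
Ws = w * np.exp(-s)
np.fill_diagonal(Ws, -w.sum(axis=0))

# Dominant eigenvalue of Ws is the SCGF; its left eigenvector l gives
# the Doob transform that realises the biased (rare) trajectories.
eigvals, eigvecs = np.linalg.eig(Ws.T)   # eigenvectors of Ws.T = left of Ws
k = np.argmax(eigvals.real)
theta = eigvals[k].real                  # scaled cumulant generating function
l = np.abs(eigvecs[:, k].real)

doob = (l[:, None] / l[None, :]) * Ws    # Doob rates for x -> y, y != x
np.fill_diagonal(doob, 0.0)
print(theta)
```

For this symmetric toy example the left eigenvector is uniform and the Doob rates reduce to $e^{-s}$; in the lattice models of the paper the eigenvector is approximated by an MPS, and the same ratio construction yields the efficient sampling scheme.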
Chemical reaction networks offer a natural nonlinear generalisation of linear Markov jump processes on a finite state-space. In this paper, we analyse the dynamical large deviations of such models, starting from their microscopic version, the chemical master equation. By taking a large-volume limit, we show that those systems can be described by a path integral formalism over a Lagrangian functional of concentrations and chemical fluxes. This Lagrangian is dual to a Hamiltonian, whose trajectories correspond to the most likely evolution of the system given its boundary conditions. The same can be done for a system biased on time-averaged concentrations and currents, yielding a biased Hamiltonian whose trajectories are optimal paths conditioned on those observables. The appropriate boundary conditions turn out to be mixed, so that, in the long time limit, those trajectories converge to well-defined attractors. We are then able to identify the largest value that the Hamiltonian takes over those attractors with the scaled cumulant generating function of our observables, providing a non-linear equivalent to the well-known Donsker-Varadhan formula for jump processes. On that basis, we prove that chemical reaction networks that are deterministically multistable generically undergo first-order dynamical phase transitions in the vicinity of zero bias. We illustrate that fact through a simple bistable model called the Schlögl model, as well as multistable and unstable generalisations of it, and we make a few surprising observations regarding the stability of deterministic fixed points, and the breaking of ergodicity in the large-volume limit.
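The microscopic starting point of such an analysis, the chemical master equation, can be sampled exactly with the standard Gillespie algorithm. The sketch below simulates the Schlögl network $2X \to 3X$, $3X \to 2X$, $\varnothing \to X$, $X \to \varnothing$ with an illustrative bistable parameter set (assumed here, not taken from the paper) and volume-scaled propensities.

```python
import numpy as np

def gillespie_schlogl(x0=30, t_max=5.0, V=10.0, rng=None):
    """Exact (Gillespie) simulation of the Schlogl model:
        2X -> 3X,  3X -> 2X,  0 -> X,  X -> 0.
    Rate constants are an illustrative bistable choice; propensities
    carry the volume factors of the chemical master equation."""
    rng = rng if rng is not None else np.random.default_rng(0)
    k1, k2, k3, k4 = 3.0, 0.6, 0.25, 2.95   # assumed parameters
    t, x = 0.0, x0
    traj = [(t, x)]
    changes = (1, -1, 1, -1)                 # copy-number change per reaction
    while t < t_max:
        a = np.array([
            k1 * x * (x - 1) / V,                 # 2X -> 3X
            k2 * x * (x - 1) * (x - 2) / V**2,    # 3X -> 2X
            k3 * V,                               # 0  -> X
            k4 * x,                               # X  -> 0
        ])
        a0 = a.sum()
        t += rng.exponential(1.0 / a0)            # next-reaction time
        r = rng.choice(4, p=a / a0)               # which reaction fires
        x += changes[r]
        traj.append((t, x))
    return traj

traj = gillespie_schlogl()
counts = [x for _, x in traj]
print(len(traj), min(counts), max(counts))
```

Long trajectories of this kind are exactly what becomes exponentially expensive to condition on rare time-averaged concentrations or currents, which is why the paper works instead with the large-volume Hamiltonian description.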
For diffusive many-particle systems such as the SSEP (symmetric simple exclusion process) or independent particles coupled with reservoirs at the boundaries, we analyze the density fluctuations conditioned on current integrated over a large time. We determine the conditioned large deviation function of density by a microscopic calculation. We then show that it can be expressed in terms of the solutions of Hamilton-Jacobi equations, which can be written for general diffusive systems using a fluctuating hydrodynamics description.