
Anomalous scaling of dynamical large deviations

Posted by: Hugo Touchette
Publication date: 2018
Research field: Physics
Paper language: English





The typical values and fluctuations of time-integrated observables of nonequilibrium processes driven in steady states are known to be characterized by large deviation functions, generalizing the entropy and free energy to nonequilibrium systems. The definition of these functions involves a scaling limit, similar to the thermodynamic limit, in which the integration time $\tau$ appears linearly, unless the process considered has long-range correlations, in which case $\tau$ is generally replaced by $\tau^\xi$ with $\xi \neq 1$. Here we show that such an anomalous power-law scaling in time of large deviations can also arise without long-range correlations in Markovian processes as simple as the Langevin equation. We describe the mechanism underlying this scaling using path integrals and discuss its physical consequences for more general processes.
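For orientation, the scaling discussed in this abstract can be written in generic notation (the observable $A_\tau$, the function $f$ and the rate function $I$ are placeholder symbols, not notation from the paper itself):

$$A_\tau = \frac{1}{\tau}\int_0^{\tau} f(X_t)\,dt, \qquad P(A_\tau \approx a) \asymp e^{-\tau\, I(a)} \quad \text{(standard scaling)},$$
$$P(A_\tau \approx a) \asymp e^{-\tau^{\xi} I(a)}, \quad \xi \neq 1 \quad \text{(anomalous scaling)}.$$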


Read also

Baruch Meerson (2019)
Employing the optimal fluctuation method (OFM), we study the large deviation function of long-time averages $(1/T)\int_{-T/2}^{T/2} x^n(t)\,dt$, $n=1,2,\dots$, of centered stationary Gaussian processes. These processes are correlated and, in general, non-Markovian. We show that the anomalous scaling with time of the large-deviation function, recently observed for $n>2$ for the particular case of the Ornstein-Uhlenbeck process, holds for a whole class of stationary Gaussian processes.
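As a concrete instance of the class discussed above (written here for illustration, not quoted from the paper), one can take the Ornstein-Uhlenbeck process mentioned in the abstract,

$$\dot{x}(t) = -\gamma\, x(t) + \sqrt{2D}\,\eta(t), \qquad \overline{x^n}_T = \frac{1}{T}\int_{-T/2}^{T/2} x^n(t)\,dt,$$

with $\eta(t)$ Gaussian white noise; according to the abstract, for $n>2$ the large deviation function of $\overline{x^n}_T$ scales anomalously with $T$.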
We show how to calculate the likelihood of dynamical large deviations using evolutionary reinforcement learning. An agent, a stochastic model, propagates a continuous-time Monte Carlo trajectory and receives a reward conditioned upon the values of certain path-extensive quantities. Evolution produces progressively fitter agents, eventually allowing the calculation of a piece of a large-deviation rate function for a particular model and path-extensive quantity. For models with small state spaces the evolutionary process acts directly on rates, and for models with large state spaces the process acts on the weights of a neural network that parameterizes the model's rates. This approach shows how path-extensive physics problems can be considered within a framework widely used in machine learning.
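A minimal sketch of the evolutionary idea described above, assuming a toy two-state jump process with the time-averaged activity (jumps per unit time) as the path-extensive quantity; the rates, the fitness definition and all function names below are illustrative choices, not the authors' implementation:

```python
import math
import random

def gillespie_activity(rates, T=200.0):
    """Simulate a two-state continuous-time Markov chain with jump rates
    rates = (r01, r10) and return the time-averaged activity (jumps per unit time)."""
    t, state, jumps = 0.0, 0, 0
    while t < T:
        r = rates[state]                      # escape rate from the current state
        t += random.expovariate(r)            # exponential waiting time
        if t < T:
            state = 1 - state                 # flip state
            jumps += 1
    return jumps / T

def fitness(rates, target=2.0, T=200.0):
    """Reward trajectories whose time-averaged activity is close to a target value
    (a stand-in for conditioning on a path-extensive observable)."""
    k = gillespie_activity(rates, T)
    return -(k - target) ** 2

def evolve(generations=50, pop_size=20, sigma=0.1):
    """Simple evolutionary loop: mutate log-rates, keep the fittest agent."""
    best = [1.0, 1.0]                         # initial jump rates (r01, r10)
    best_fit = fitness(best)
    for _ in range(generations):
        for _ in range(pop_size):
            child = [r * math.exp(sigma * random.gauss(0.0, 1.0)) for r in best]
            f = fitness(child)
            if f > best_fit:
                best, best_fit = child, f
    return best, best_fit

if __name__ == "__main__":
    rates, fit = evolve()
    print("evolved rates:", rates, "fitness:", fit)
```

Mutating log-rates keeps the rates positive; in the large-state-space setting described in the abstract, the mutated parameters would instead be the weights of a neural network.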
The large deviation (LD) statistics of dynamical observables is encoded in the spectral properties of deformed Markov generators. Recent works have shown that tensor network methods are well suited to compute the relevant leading eigenvalues and eigenvectors accurately. However, the efficient generation of the corresponding rare trajectories is a harder task. Here we show how to exploit the MPS approximation of the dominant eigenvector to implement an efficient sampling scheme which closely resembles the optimal (so-called Doob) dynamics that realises the rare events. We demonstrate our approach on three well-studied lattice models, the Fredrickson-Andersen and East kinetically constrained models (KCMs), and the symmetric simple exclusion process (SSEP). We discuss how to generalise our approach to higher dimensions.
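For context, the Doob dynamics referred to above is usually constructed through the generalized Doob transform; in standard notation (assumed here, not taken from the paper), if $\mathbb{W}_s$ is the tilted generator with largest eigenvalue $\theta(s)$ and left eigenvector $l_s$, the auxiliary dynamics realising the rare trajectories has generator

$$\mathbb{W}^{\mathrm{Doob}}_s = L_s\, \mathbb{W}_s\, L_s^{-1} - \theta(s)\,\mathbb{1}, \qquad L_s = \mathrm{diag}(l_s),$$

and the tensor-network step described in the abstract amounts to approximating $l_s$ by a matrix product state.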
Chemical reaction networks offer a natural nonlinear generalisation of linear Markov jump processes on a finite state-space. In this paper, we analyse the dynamical large deviations of such models, starting from their microscopic version, the chemical master equation. By taking a large-volume limit, we show that those systems can be described by a path integral formalism over a Lagrangian functional of concentrations and chemical fluxes. This Lagrangian is dual to a Hamiltonian, whose trajectories correspond to the most likely evolution of the system given its boundary conditions. The same can be done for a system biased on time-averaged concentrations and currents, yielding a biased Hamiltonian whose trajectories are optimal paths conditioned on those observables. The appropriate boundary conditions turn out to be mixed, so that, in the long time limit, those trajectories converge to well-defined attractors. We are then able to identify the largest value that the Hamiltonian takes over those attractors with the scaled cumulant generating function of our observables, providing a non-linear equivalent to the well-known Donsker-Varadhan formula for jump processes. On that basis, we prove that chemical reaction networks that are deterministically multistable generically undergo first-order dynamical phase transitions in the vicinity of zero bias. We illustrate that fact through a simple bistable model called the Schlogl model, as well as multistable and unstable generalisations of it, and we make a few surprising observations regarding the stability of deterministic fixed points, and the breaking of ergodicity in the large-volume limit.
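For reference, the scaled cumulant generating function mentioned above is defined, in generic notation with a counting field $s$ conjugate to a time-averaged observable $A_T$ (volume factors omitted for simplicity), as

$$\theta(s) = \lim_{T\to\infty} \frac{1}{T}\, \ln \left\langle e^{\,s\, T A_T} \right\rangle,$$

and the result stated in the abstract identifies $\theta(s)$, in the large-volume limit, with the largest value the biased Hamiltonian takes over its attractors.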
For diffusive many-particle systems such as the SSEP (symmetric simple exclusion process) or independent particles coupled with reservoirs at the boundaries, we analyze the density fluctuations conditioned on current integrated over a large time. We determine the conditioned large deviation function of density by a microscopic calculation. We then show that it can be expressed in terms of the solutions of Hamilton-Jacobi equations, which can be written for general diffusive systems using a fluctuating hydrodynamics description.
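As background, the fluctuating-hydrodynamics description invoked above writes the density and current fields of a one-dimensional diffusive system as (standard macroscopic-fluctuation-theory notation, assumed here rather than quoted)

$$\partial_t \rho + \partial_x j = 0, \qquad j = -D(\rho)\,\partial_x \rho + \sqrt{\sigma(\rho)}\,\eta(x,t),$$

with $\eta$ a weak space-time white noise; for the SSEP, $D(\rho)=1$ and $\sigma(\rho)=2\rho(1-\rho)$.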