
Capturing exponential variance using polynomial resources: applying tensor networks to non-equilibrium stochastic processes

Added by Tomi Johnson
Publication date: 2014
Field: Physics
Language: English





Estimating the expected value of an observable appearing in a non-equilibrium stochastic process usually involves sampling. If the observable's variance is high, many samples are required. In contrast, we show that performing the same task without sampling, using tensor network compression, efficiently captures high variances in systems of various geometries and dimensions. We provide examples for which matching the accuracy of our efficient method would require a sample size scaling exponentially with system size. In particular, the high-variance observable $\mathrm{e}^{-\beta W}$, motivated by Jarzynski's equality, with $W$ the work done quenching from equilibrium at inverse temperature $\beta$, is exactly and efficiently captured by tensor networks.
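A minimal sketch (not the paper's tensor-network method) of why sampling $\mathrm{e}^{-\beta W}$ is hard: for an assumed toy Gaussian work distribution, Jarzynski's equality holds exactly, yet the naive sample-mean estimator's relative error grows exponentially with the width of the distribution. All parameters below are illustrative.

```python
import math
import random

# Toy assumption (not the paper's setup): work W ~ N(mu, sigma^2).
# For Gaussian work, Jarzynski's equality <e^{-beta W}> = e^{-beta dF}
# holds exactly with dF = mu - beta * sigma**2 / 2.
beta, mu, sigma = 1.0, 0.0, 1.0
exact = math.exp(-beta * mu + (beta * sigma) ** 2 / 2)

random.seed(0)
n = 100_000
samples = [math.exp(-beta * random.gauss(mu, sigma)) for _ in range(n)]
estimate = sum(samples) / n

# The estimator's relative variance scales like exp(beta^2 sigma^2) - 1,
# so wider work distributions (larger systems) need exponentially more samples.
print(exact, estimate)
```

With these parameters the estimate converges; increasing `beta * sigma` quickly makes the same sample size inadequate, which is the regime the abstract targets.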



Related Research

Motivated by the recent success of tensor networks in calculating the residual entropy of spin ice and kagome Ising models, we develop a general framework to study frustrated Ising models in terms of infinite tensor networks, i.e. tensor networks that can be contracted using standard algorithms for infinite systems. This is achieved by reformulating the problem as local rules for configurations on overlapping clusters chosen in such a way that they relieve the frustration, i.e. such that the energy can be minimized independently on each cluster. We show that optimizing the choice of clusters, including the weight on shared bonds, is crucial for the contractibility of the tensor networks, and we derive some basic rules and a linear program to implement them. We illustrate the power of the method by computing the residual entropy of a frustrated Ising spin system on the kagome lattice with next-next-nearest neighbour interactions, vastly outperforming Monte Carlo methods in speed and accuracy. The extension to finite temperature is briefly discussed.
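The core idea that a classical partition function is a contractible tensor network can be caricatured in one dimension (a far simpler case than the kagome construction above): for an open Ising chain, contracting the network is just a transfer-matrix product, which brute-force enumeration confirms. The chain length and couplings below are invented for illustration.

```python
import itertools
import math

# Open Ising chain of N spins, E = -J * sum_i s_i s_{i+1} (illustrative toy).
N, J, beta = 6, 1.0, 0.5

# Transfer matrix T[s, s'] = exp(beta * J * s * s'), with s, s' in {-1, +1}:
# contracting the 1D tensor network is multiplying these matrices.
T = [[math.exp(beta * J * s * t) for t in (-1, 1)] for s in (-1, 1)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

v = [1.0, 1.0]            # free boundary: sum over the last spin
for _ in range(N - 1):
    v = matvec(T, v)
Z_tn = sum(v)             # sum over the first spin as well

# Brute-force check: sum over all 2^N spin configurations.
Z_bf = 0.0
for spins in itertools.product((-1, 1), repeat=N):
    E = -J * sum(spins[i] * spins[i + 1] for i in range(N - 1))
    Z_bf += math.exp(-beta * E)

print(Z_tn, Z_bf)
```

In two dimensions the same contraction becomes exponentially costly if done exactly, which is where the approximate infinite-network algorithms the abstract relies on come in.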
We study numerically the behavior of RNA secondary structures under the influence of a varying external force. This allows us to measure the work $W$ during the resulting fast unfolding and refolding processes. Here, we investigate a medium-size hairpin structure. Using a sophisticated large-deviation algorithm, we are able to measure work distributions with high precision down to probabilities as small as $10^{-46}$. Due to this precision, and by comparison with exact free-energy calculations, we are able to verify the theorems of Crooks and Jarzynski. Furthermore, we analyze force-extension curves and the configurations of the secondary structures during unfolding and refolding for typical equilibrium processes and for non-equilibrium processes conditioned on selected values of the measured work $W$, both typical and rare ones. We find that the non-equilibrium processes whose work values are close to those most relevant for applying the Crooks and Jarzynski theorems are, respectively, most and quite similar to the equilibrium processes. Thus, a similarity of equilibrium and non-equilibrium behavior with respect to a mere scalar variable, which occurs with a very small probability but can be generated in a controlled but non-targeted way, is related to a high similarity for the set of configurations sampled along the full dynamical trajectory.
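The large-deviation idea behind measuring probabilities as small as $10^{-46}$ can be caricatured with exponential tilting: sample from a distribution shifted into the rare region and reweight by the likelihood ratio. A hypothetical Gaussian tail probability stands in for the work distribution; the specific algorithm of the abstract is more involved.

```python
import math
import random

# Estimate p = P(X > a) for X ~ N(0, 1) with a = 6, where naive sampling
# would need ~10^9 samples to see a single hit (a toy stand-in for
# measuring tiny work-distribution probabilities).
a = 6.0
exact = 0.5 * math.erfc(a / math.sqrt(2))

random.seed(1)
n = 200_000
total = 0.0
for _ in range(n):
    # Sample from the tilted density N(a, 1) and reweight by the
    # likelihood ratio N(0,1)/N(a,1) = exp(-a*x + a^2/2).
    x = random.gauss(a, 1.0)
    if x > a:
        total += math.exp(-a * x + a * a / 2)
estimate = total / n

print(exact, estimate)
```

The tilted estimator reaches a few-percent relative accuracy on a probability of order $10^{-9}$ with only $2\times10^5$ samples, illustrating how biased sampling plus reweighting reaches probabilities far beyond direct simulation.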
Quantifying how distinguishable two stochastic processes are lies at the heart of many fields, such as machine learning and quantitative finance. While several measures have been proposed for this task, none have universal applicability and ease of use. In this Letter, we suggest a set of requirements for a well-behaved measure of process distinguishability. Moreover, we propose a family of measures, called divergence rates, that satisfy all of these requirements. Focusing on a particular member of this family -- the co-emission divergence rate -- we show that it can be computed efficiently, behaves qualitatively similarly to other commonly used measures in their regimes of applicability, and remains well-behaved in scenarios where other measures break down.
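The co-emission divergence rate itself is not spelled out in the abstract; as a stand-in, the classical Kullback-Leibler divergence rate between two finite Markov chains shows what a per-step distinguishability measure looks like. The two transition matrices below are made-up examples.

```python
import math

# Two hypothetical two-state Markov chains (rows = current state,
# columns = next state; rows sum to 1).
P = [[0.9, 0.1], [0.2, 0.8]]
Q = [[0.8, 0.2], [0.3, 0.7]]

# Stationary distribution of P by power iteration.
pi = [0.5, 0.5]
for _ in range(1000):
    pi = [sum(pi[s] * P[s][t] for s in range(2)) for t in range(2)]

# KL divergence rate: the expected per-step log-likelihood ratio under P,
# lim (1/n) KL(P^n || Q^n) = sum_s pi(s) sum_t P(s,t) log(P(s,t)/Q(s,t)).
rate = sum(
    pi[s] * P[s][t] * math.log(P[s][t] / Q[s][t])
    for s in range(2) for t in range(2)
)
print(rate)
```

The rate is zero exactly when the chains induce the same process, and it grows as the transition probabilities diverge, which is the qualitative behavior any well-behaved distinguishability measure should share.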
Yunxin Zhang (2018)
In a recent Letter [Phys. Rev. Lett. {\bf 121}, 070601 (2018), arXiv:1802.06554], the speed limit for classical stochastic Markov processes is considered, and a trade-off inequality between the speed of the state transformation and the entropy production is given. In this Comment, a more accurate inequality is presented.
We investigate the standard deviation $\delta v(t_{\mathrm{samp}})$ of the variance $v[\mathbf{x}]$ of time series $\mathbf{x}$ measured over a finite sampling time $t_{\mathrm{samp}}$, focusing on non-ergodic systems where independent configurations $c$ get trapped in meta-basins of a generalized phase space. It is thus relevant in which order averages over the configurations $c$ and over time series $k$ of a configuration $c$ are performed. Three variances of $v[\mathbf{x}_{ck}]$ must be distinguished: the total variance $\delta v_{\mathrm{tot}}^2 = \delta v_{\mathrm{int}}^2 + \delta v_{\mathrm{ext}}^2$ and its contributions $\delta v_{\mathrm{int}}^2$, the typical internal variance within the meta-basins, and $\delta v_{\mathrm{ext}}^2$, characterizing the dispersion between the different basins. We discuss simplifications for physical systems where the stochastic variable $x(t)$ is due to a density field averaged over a large system volume $V$. The relations are illustrated for the shear-stress fluctuations in quenched elastic networks and low-temperature glasses formed by polydisperse particles and free-standing polymer films. The different statistics of $\delta v_{\mathrm{int}}$ and $\delta v_{\mathrm{ext}}$ are manifested by their different system-size dependences.
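The split of the total variance into internal (within-basin) and external (between-basin) parts is an instance of the law of total variance, which a small synthetic simulation makes concrete. All quantities below are invented stand-ins for the measured variances $v[\mathbf{x}_{ck}]$.

```python
import random

random.seed(2)
n_conf, n_series = 50, 20

# Synthetic stand-in: each configuration c has its own basin mean
# (external dispersion), and each time series k scatters around it
# (internal dispersion).
v = []
for c in range(n_conf):
    basin_mean = random.gauss(0.0, 2.0)
    v.append([basin_mean + random.gauss(0.0, 1.0) for k in range(n_series)])

def pvar(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

dv2_tot = pvar([x for row in v for x in row])
dv2_int = sum(pvar(row) for row in v) / n_conf        # within basins
dv2_ext = pvar([sum(row) / n_series for row in v])    # between basins

# Law of total variance: dv2_tot = dv2_int + dv2_ext
# (an exact identity here, since all groups have equal size).
print(dv2_tot, dv2_int, dv2_ext)
```

The identity holds exactly for the empirical population variances; the order of configuration and time averages matters precisely because the two contributions scale differently with system size, as the abstract emphasizes.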
