Estimating the expected value of an observable appearing in a non-equilibrium stochastic process usually involves sampling. If the observable's variance is high, many samples are required. In contrast, we show that performing the same task without sampling, using tensor network compression, efficiently captures high variances in systems of various geometries and dimensions. We provide examples for which matching the accuracy of our efficient method would require a sample size scaling exponentially with system size. In particular, the high-variance observable $\mathrm{e}^{-\beta W}$, motivated by Jarzynski's equality, with $W$ the work done quenching from equilibrium at inverse temperature $\beta$, is exactly and efficiently captured by tensor networks.
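For reference, the equality motivating this observable (as stated in Jarzynski's original formulation; $\Delta F$ denotes the equilibrium free-energy difference between the initial and final Hamiltonians, and the average is over realizations of the non-equilibrium process) reads:

\[
\left\langle \mathrm{e}^{-\beta W} \right\rangle \;=\; \mathrm{e}^{-\beta \,\Delta F},
\]

so an unbiased estimate of $\Delta F$ requires averaging $\mathrm{e}^{-\beta W}$, whose variance can grow exponentially with system size when rare low-work trajectories dominate the average.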
Motivated by the recent success of tensor networks in calculating the residual entropy of spin-ice and kagome Ising models, we develop a general framework to study frustrated Ising models in terms of infinite tensor networks
We numerically study the behavior of RNA secondary structures under the influence of a varying external force. This allows us to measure the work $W$ during the resulting fast unfolding and refolding processes. Here, we investigate a medium-size hairpin structure
Quantifying how distinguishable two stochastic processes are lies at the heart of many fields, such as machine learning and quantitative finance. While several measures have been proposed for this task, none have universal applicability and ease of use
In a recent Letter [Phys. Rev. Lett. {\bf 121}, 070601 (2018), arXiv:1802.06554], the speed limit for classical stochastic Markov processes is considered, and a trade-off inequality between the speed of the state transformation and the entropy production
We investigate the standard deviation $\delta v(t_{\mathrm{samp}})$ of the variance $v[\mathbf{x}]$ of time series $\mathbf{x}$ measured over a finite sampling time $t_{\mathrm{samp}}$, focusing on non-ergodic systems where independent configurations $c$ get trapped in meta-basins of a gen