
Minimum Power to Maintain a Nonequilibrium Distribution of a Markov Chain

Posted by Yihui Quek
Publication date: 2019
Research language: English





Biological systems use energy to maintain non-equilibrium distributions for long times, e.g. of chemical concentrations or protein conformations. What are the fundamental limits of the power used to hold a stochastic system in a desired distribution over states? We study the setting of an uncontrolled Markov chain $Q$ altered into a controlled chain $P$ having a desired stationary distribution. Thermodynamic considerations lead to an appropriately defined Kullback-Leibler (KL) divergence rate $D(P\|Q)$ as the cost of control, a setting introduced by Todorov, corresponding to a Markov decision process with mean log loss action cost. The optimal controlled chain $P^*$ minimizes the KL divergence rate $D(\cdot\|Q)$ subject to a stationary distribution constraint, and the minimal KL divergence rate lower bounds the power used. While this optimization problem is familiar from the large deviations literature, we offer a novel interpretation as a minimum holding cost and compute the minimizer $P^*$ more explicitly than previously available. We state a version of our results for both discrete- and continuous-time Markov chains, and find nice expressions for the important case of a reversible uncontrolled chain $Q$, for a two-state chain, and for birth-and-death processes.
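The constrained minimization above is easy to explore numerically. The sketch below is our illustration, not the paper's method (which derives the minimizer $P^*$ explicitly); the uncontrolled chain `Q`, the target distribution `pi`, and the scipy-based search are assumptions made for the example. For a two-state chain, the stationarity constraint $\pi P = \pi$ leaves a single free parameter, so the KL divergence rate $D(P\|Q) = \sum_x \pi(x) \sum_y P(x,y) \log\frac{P(x,y)}{Q(x,y)}$ can be minimized by a one-dimensional search:

```python
# Hypothetical example: minimize the KL divergence rate D(P||Q) over
# two-state controlled chains P with a prescribed stationary distribution pi.
# Q and pi below are illustrative choices, not values from the paper.
import numpy as np
from scipy.optimize import minimize_scalar

Q = np.array([[0.9, 0.1],
              [0.2, 0.8]])        # uncontrolled chain Q
pi = np.array([0.3, 0.7])         # desired stationary distribution

def kl_rate(a):
    """D(P||Q) for P parameterized by a = P(0->1); pi P = pi fixes b = P(1->0)."""
    b = pi[0] * a / pi[1]
    P = np.array([[1 - a, a],
                  [b, 1 - b]])
    return float(sum(pi[x] * P[x, y] * np.log(P[x, y] / Q[x, y])
                     for x in range(2) for y in range(2)))

upper = min(1.0, pi[1] / pi[0])   # keep both a and b strictly inside (0, 1)
res = minimize_scalar(kl_rate, bounds=(1e-9, upper - 1e-9), method="bounded")
print(f"optimal P(0->1) = {res.x:.4f}, minimal KL rate = {res.fun:.6f}")
```

The printed minimum is the lower bound on the power used to hold `pi` for this particular `Q`; the paper's explicit results make the same quantity available without a numerical search.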




Read also

We introduce an efficient nonreversible Markov chain Monte Carlo algorithm to generate self-avoiding walks with a variable endpoint. In two dimensions, the new algorithm slightly outperforms the two-move nonreversible Berretti-Sokal algorithm introduced by H.~Hu, X.~Chen, and Y.~Deng in \cite{old}, while for three-dimensional walks it is 3--5 times faster. The new algorithm introduces nonreversible Markov chains that obey global balance and allows for three types of elementary moves on the existing self-avoiding walk: shorten, extend, or alter the conformation without changing the walk's length.
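For orientation, here is a minimal sketch of the classic reversible Berretti-Sokal sampler on the 2D square lattice, the detailed-balance baseline that the nonreversible algorithms above improve on; the fugacity `BETA` and all names are our illustrative choices, not taken from the paper:

```python
# Reversible Berretti-Sokal sampling of variable-length self-avoiding walks,
# with stationary weight beta^length. This is the textbook baseline, not the
# paper's nonreversible algorithm.
import random

BETA = 0.35                                  # step fugacity (subcritical in 2D)
STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # 2d = 4 lattice directions

def berretti_sokal(n_attempts, seed=0):
    rng = random.Random(seed)
    walk = [(0, 0)]                          # walk sites, starting at the origin
    occupied = {(0, 0)}
    for _ in range(n_attempts):
        if rng.random() < 0.5:               # propose: extend the endpoint
            dx, dy = rng.choice(STEPS)
            new = (walk[-1][0] + dx, walk[-1][1] + dy)
            # self-avoidance, then detailed-balance acceptance min(1, 2d*beta)
            if new not in occupied and rng.random() < min(1.0, 4 * BETA):
                walk.append(new)
                occupied.add(new)
        else:                                # propose: delete the endpoint
            if len(walk) > 1 and rng.random() < min(1.0, 1.0 / (4 * BETA)):
                occupied.discard(walk.pop())
    return walk

print(len(berretti_sokal(100_000)) - 1, "steps in the final walk")
```

The acceptance ratios satisfy detailed balance for the weight $\beta^{|\omega|}$; the algorithms discussed above instead use nonreversible dynamics obeying only global balance, which is the source of the reported speedups.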
De Tao Mao, Yisheng Zhong (2009)
To describe and analyze the dynamics of Self-Organized Criticality (SOC) systems, a four-state continuous-time Markov model is proposed in this paper. Unlike the computer-simulation or numerical-experimental approaches commonly employed to explain the power law in SOC, in this paper, based on this Markov model and using E. T. Jaynes's Maximum Entropy method, we derive a mathematical proof of the power-law distribution for the size of these events. Both the Markov model and the proof of the power law present a new angle on the universality of power-law distributions; they also show that the scale-free property exists not only in SOC systems, but in a class of dynamical systems that can be modelled by the proposed Markov model.
We discuss work extraction from classical information engines (e.g., Szilard) with $N$ particles, $q$ partitions, and arbitrary initial non-equilibrium states. In particular, we focus on their \emph{optimal} behaviour, which includes the measurement of a set of quantities $\Phi$ with a feedback protocol that extracts the maximal average amount of work. We show that the optimal non-equilibrium state to which the engine should be driven before the measurement is given by the normalised maximum-likelihood probability distribution of a statistical model that admits $\Phi$ as sufficient statistics. Furthermore, we show that the minimax universal code redundancy $\mathcal{R}^*$ associated with this model provides an upper bound to the work that the demon can extract on average from the cycle, in units of $k_{\mathrm{B}}T$. We also find that, in the limit of large $N$, the maximum average extracted work cannot exceed $H[\Phi]/2$, i.e., one half of the Shannon entropy of the measurement. Our results establish a connection between optimal work extraction in stochastic thermodynamics and optimal universal data compression, providing design principles for optimal information engines. In particular, they suggest that: (i) optimal coding is thermodynamically efficient, and (ii) it is essential to drive the system into a critical state in order to achieve optimal performance.
We develop a simple algorithm to parallelize the generation of Markov chains. In this algorithm, multiple Markov chains are generated in parallel and joined together to form a longer Markov chain. The joints between the constituent Markov chains are processed using detailed balance. We apply the parallelization algorithm to multicanonical calculations of the two-dimensional Ising model and demonstrate accurate estimation of the multicanonical weights.
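A minimal sketch of the joining idea, under assumed names and a toy target rather than the paper's multicanonical Ising setting: segments are generated independently (hence parallelizably) from starting points drawn from a known density, and each junction between consecutive segments is accepted with a detailed-balance (independence Metropolis-Hastings) test; a rejected junction falls back to continuing the chain from its current state:

```python
# Illustrative joining of independently generated Metropolis segments.
# The target exp(-beta*E) and the start density q are toy assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
beta = 1.0
energy = lambda x: (x**2 - 1.0)**2   # double-well target pi(x) ~ exp(-beta*E(x))
q = norm(0.0, 1.0)                   # density the segment starts are drawn from

def metropolis_segment(x0, n_steps, step=0.5):
    """Ordinary Metropolis segment with Gaussian random-walk proposals."""
    xs = [x0]
    for _ in range(n_steps):
        y = xs[-1] + rng.normal(scale=step)
        if rng.random() < np.exp(-beta * (energy(y) - energy(xs[-1]))):
            xs.append(y)
        else:
            xs.append(xs[-1])
    return xs

# Segments could come from parallel workers; generated sequentially here.
starts = q.rvs(size=4, random_state=rng)
segments = [metropolis_segment(x0, 1000) for x0 in starts]

chain = segments[0]
for seg in segments[1:]:
    x, y = chain[-1], seg[0]
    # Detailed-balance (independence MH) test at the junction x -> y.
    if rng.random() < np.exp(-beta * (energy(y) - energy(x))) * q.pdf(x) / q.pdf(y):
        chain += seg                                  # splice the segment in
    else:
        chain += metropolis_segment(x, len(seg) - 1)  # sequential fallback
print(len(chain), "total states")
```

Accepted junctions splice in work done in parallel, while rejections cost a sequential regeneration, so the approach pays off when the start density `q` overlaps the target well.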
P. Gaspard, D. Andrieux (2014)
We report a theoretical study of stochastic processes modeling the growth of first-order Markov copolymers, as well as the reversed reaction of depolymerization. These processes are ruled by kinetic equations describing both the attachment and detachment of monomers. Exact solutions are obtained for these kinetic equations in the steady regimes of multicomponent copolymerization and depolymerization. Thermodynamic equilibrium is identified as the state at which the growth velocity is vanishing on average and where detailed balance is satisfied. Away from equilibrium, the analytical expression of the thermodynamic entropy production is deduced in terms of the Shannon disorder per monomer in the copolymer sequence. The Mayo-Lewis equation is recovered in the fully irreversible growth regime. The theory also applies to Bernoullian chains in the case where the attachment and detachment rates only depend on the reacting monomer.