
Maximum Kolmogorov-Sinai entropy vs minimum mixing time in Markov chains

Added by Martin Mihelich
Publication date: 2015
Fields: Physics
Language: English





Many modern techniques employed in physics, such as the computation of path integrals, rely on random walks on graphs that can be represented as Markov chains. Traditionally, the running time of such a sampling algorithm is estimated from the number of steps the chain needs to approach its stationary distribution. This quantity, generally called the mixing time, is often difficult to compute. In this paper, we suggest an alternative estimate based on the Kolmogorov-Sinai entropy (KSE), by establishing a link between the maximization of the KSE and the minimization of the mixing time. Since the KSE is generally easier to compute than the mixing time, this link provides a new, faster method to approximate the minimum mixing time that could be of interest in computer science and statistical physics. Beyond this, our finding will also be of interest to the out-of-equilibrium community, by providing a new rationale for selecting stationary states in out-of-equilibrium physics: it seems reasonable that, in a physical system with two simultaneous equiprobable dynamics, the final stationary state will be closer to the stationary state corresponding to the faster dynamics (smaller mixing time). Through the empirical link found in this letter, this state corresponds to a state of maximal Kolmogorov-Sinai entropy. If this is true, it would provide a more satisfying rule for selecting stationary states in complex systems, such as the climate, than the maximization of entropy production.
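Both quantities in this abstract can be evaluated directly for a small chain. The sketch below (Python; an illustrative toy, not the authors' method) computes the Kolmogorov-Sinai entropy h_KS = -sum_i pi_i sum_j P_ij ln P_ij and a brute-force total-variation mixing time for lazy walks on a 3-node complete graph parametrised by a holding probability p; on this symmetric example the entropy-maximising chain (p = 1/3, the uniform chain) is also the fastest-mixing one.

```python
import numpy as np

def stationary_distribution(P):
    """Stationary distribution: left eigenvector of P for eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return pi / pi.sum()

def ks_entropy(P):
    """Kolmogorov-Sinai entropy of the chain: -sum_i pi_i sum_j P_ij ln P_ij."""
    pi = stationary_distribution(P)
    logP = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
    return -float(np.sum(pi[:, None] * P * logP))

def mixing_time(P, eps=0.25, t_max=100_000):
    """Smallest t with max_i ||P^t(i,.) - pi||_TV <= eps, found by brute force."""
    pi = stationary_distribution(P)
    Pt = np.eye(P.shape[0])
    for t in range(1, t_max + 1):
        Pt = Pt @ P
        if 0.5 * np.max(np.abs(Pt - pi).sum(axis=1)) <= eps:
            return t
    return None

# Lazy walks on the 3-node complete graph with holding probability p.
for p in (0.1, 1/3, 0.5):
    q = (1 - p) / 2
    P = np.array([[p, q, q], [q, p, q], [q, q, p]])
    # eps = 1e-3: a strict threshold so the differences are visible on 3 nodes
    print(f"p={p:.2f}  h_KS={ks_entropy(P):.3f}  t_mix={mixing_time(P, eps=1e-3)}")
```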




Related research

We derive rigorous results on the link between the principle of maximum entropy production and the principle of maximum Kolmogorov-Sinai entropy using a Markov model of passive scalar diffusion, the Zero Range Process. We show analytically that both the entropy production and the Kolmogorov-Sinai entropy, seen as functions of f, admit a unique maximum, denoted fmaxEP and fmaxKS respectively. The behavior of these two maxima is explored as a function of the system disequilibrium and of the system resolution N. The main result of this article is that fmaxEP and fmaxKS have the same Taylor expansion at first order in the deviation from equilibrium. We find that fmaxEP hardly depends on N, whereas fmaxKS depends strongly on N. In particular, for a fixed difference of potential between the reservoirs, fmaxEP(N) tends towards a non-zero value, while fmaxKS(N) tends to 0 when N goes to infinity. For values of N typical of those adopted by Paltridge and climatologists, we show that fmaxEP and fmaxKS coincide even far from equilibrium. Finally, we show that one can find an optimal resolution N* such that fmaxEP and fmaxKS coincide, at least up to second order in a parameter proportional to the non-equilibrium fluxes imposed at the boundaries.
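The two functionals compared in the abstract above can both be written in terms of a chain's transition matrix P and stationary distribution pi: the entropy production rate is 1/2 sum_ij (pi_i P_ij - pi_j P_ji) ln(pi_i P_ij / pi_j P_ji) and the Kolmogorov-Sinai entropy is -sum_ij pi_i P_ij ln P_ij. The Python sketch below is illustrative only: it does not reconstruct the Zero Range Process or its reservoir coupling, and it scans a jump-bias parameter f over a toy ring family just to show how fmaxEP and fmaxKS would be located numerically.

```python
import numpy as np

def stationary(P):                       # same helper as in the sketch above
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return pi / pi.sum()

def ks_entropy(P):
    pi = stationary(P)
    logP = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
    return -float(np.sum(pi[:, None] * P * logP))

def entropy_production(P):
    """Steady-state entropy production: 1/2 sum_ij (J_ij - J_ji) ln(J_ij/J_ji),
    with stationary currents J_ij = pi_i P_ij (one-way edges are skipped)."""
    pi = stationary(P)
    J = pi[:, None] * P
    mask = (J > 0) & (J.T > 0)
    return 0.5 * float(np.sum((J - J.T)[mask] * np.log(J[mask] / J.T[mask])))

def ring_chain(f, n=5):
    """Toy family only (not the ZRP): walk on an n-site ring, hopping
    clockwise with probability f and anticlockwise with probability 1 - f."""
    P = np.zeros((n, n))
    for i in range(n):
        P[i, (i + 1) % n] = f
        P[i, (i - 1) % n] = 1 - f
    return P

fs = np.linspace(0.05, 0.95, 181)
ep = [entropy_production(ring_chain(f)) for f in fs]
ks = [ks_entropy(ring_chain(f)) for f in fs]
print("f_maxEP ~", fs[int(np.argmax(ep))], "  f_maxKS ~", fs[int(np.argmax(ks))])
# On this unconstrained ring the two maximisers differ (EP grows towards the
# edges of the f range, KSE peaks at f = 1/2); the near-coincidence described
# above is a property of the boundary-driven ZRP near equilibrium.
```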
We analyse and interpret the effects of breaking detailed balance on the convergence to equilibrium of conservative interacting particle systems and their hydrodynamic scaling limits. For finite systems of interacting particles, we review existing results showing that irreversible processes converge faster to their steady state than reversible ones. We show how this behaviour appears in the hydrodynamic limit of such processes, as described by macroscopic fluctuation theory, and we provide a quantitative expression for the acceleration of convergence in this setting. We give a geometrical interpretation of this acceleration, in terms of currents that are antisymmetric under time-reversal and orthogonal to the free energy gradient, which act to drive the system away from states where (reversible) gradient-descent dynamics result in slow convergence to equilibrium.
Motivated by robotic surveillance applications, this paper studies the novel problem of maximizing the return time entropy of a Markov chain, subject to a graph topology with travel times and stationary distribution. The return time entropy is the weighted average, over all graph nodes, of the entropy of the first return times of the Markov chain; this objective function is a function series that does not, in general, admit a closed form. The paper features theoretical and computational contributions. First, we obtain a discrete-time delayed linear system for the return time probability distribution and establish its convergence properties. We show that the objective function is continuous over a compact set and therefore admits a global maximum; a unique globally optimal solution is known only for complete graphs with unitary travel times. We then establish upper and lower bounds between the return time entropy and the well-known entropy rate of the Markov chain. To compute the optimal Markov chain numerically, we establish the asymptotic equality between the entropy, the conditional entropy and the truncated entropy, and propose an iteration to compute the gradient of the truncated entropy. Finally, we apply these results to the robotic surveillance problem. Our numerical results show that, for a model of a rational intruder over prototypical graph topologies and test cases, the maximum return time entropy chain performs better than several existing Markov chains.
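The truncated return-time entropy that this optimisation targets can be evaluated directly from the transition matrix by propagating taboo probabilities: the probability of first returning to node i after exactly t steps is P_ii for t = 1 and r Q^(t-2) c for t >= 2, where Q is P restricted to the other nodes, r the row of exit probabilities from i, and c the column of re-entry probabilities into i. The Python sketch below is illustrative only; it assumes unit travel times and uses the stationary distribution as the node weights (an assumption about the "weighted average" in the abstract).

```python
import numpy as np

def stationary(P):
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return pi / pi.sum()

def return_time_entropy(P, T=200):
    """Truncated return-time entropy: sum_i pi_i * H(first return time to i),
    with the first-return distribution cut off at T steps (it may therefore
    sum to slightly less than 1, as in a truncated entropy)."""
    n = P.shape[0]
    pi = stationary(P)
    total = 0.0
    for i in range(n):
        others = [j for j in range(n) if j != i]
        Q = P[np.ix_(others, others)]   # taboo transitions avoiding node i
        r = P[i, others]                # leave i without an immediate return
        c = P[others, i]                # re-enter i
        f = np.zeros(T + 1)
        f[1] = P[i, i]                  # return in exactly one step
        v = r.copy()
        for t in range(2, T + 1):
            f[t] = v @ c                # first return at exactly t steps
            v = v @ Q
        mask = f > 0
        total += pi[i] * (-np.sum(f[mask] * np.log(f[mask])))
    return total

# Toy example: a lazy random walk on a 4-node ring, unit travel times assumed.
P = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])
print(return_time_entropy(P))
```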
We study the entropy production in non-equilibrium quantum systems without dissipation, which is generated exclusively by the spontaneous breaking of time-reversal invariance. Systems which preserve the total energy and particle number and are in contact with two heat reservoirs are analysed. Focussing on point-like interactions, we derive the probability distribution induced by the entropy production operator. We show that all its moments are positive in the zero frequency limit. The analysis covers both Fermi and Bose statistics.
We use the kinetic theory of gases to compute the Kolmogorov-Sinai entropy per particle for a dilute gas in equilibrium. For an equilibrium system, the KS entropy, h_KS, is the sum of all of the positive Lyapunov exponents characterizing the chaotic behavior of the gas. We compute h_KS/N, where N is the number of particles in the gas. This quantity has a density expansion of the form h_KS/N = a ν [−ln ñ + b + O(ñ)], where ν is the single-particle collision frequency and ñ is the reduced number density of the gas. The theoretical values for the coefficients a and b are compared with the results of computer simulations, with excellent agreement for a, and less than satisfactory agreement for b. Possible reasons for this difference in b are discussed.
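The quoted expansion is linear in −ln ñ, so the coefficients a and b can be read off from data by a straight-line fit of h_KS/(Nν) against −ln ñ: the slope gives a and the intercept gives a·b. The sketch below shows that extraction; the numbers in it are a self-consistency check generated from the expansion itself with placeholder values, not simulation results or the paper's coefficients.

```python
import numpy as np

def fit_density_expansion(n_tilde, h_per_particle, nu):
    """Fit h_KS/N = a*nu*(-ln(n_tilde) + b) by least squares on
    y = h_KS/(N*nu) versus x = -ln(n_tilde); returns (a, b)."""
    x = -np.log(np.asarray(n_tilde, dtype=float))
    y = np.asarray(h_per_particle, dtype=float) / float(nu)
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept / slope

# Self-test with placeholder coefficients (illustrative values only).
a_true, b_true, nu = 1.0, 0.3, 2.0
densities = np.array([1e-4, 3e-4, 1e-3, 3e-3])
h = a_true * nu * (-np.log(densities) + b_true)
print(fit_density_expansion(densities, h, nu))   # ~ (1.0, 0.3)
```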