Optimal Work Extraction and the Minimum Description Length Principle

Posted by: Matteo Marsili
Publication date: 2020
Paper language: English





We discuss work extraction from classical information engines (e.g., Szilard) with $N$ particles, $q$ partitions, and arbitrary initial non-equilibrium states. In particular, we focus on their \emph{optimal} behaviour, which includes the measurement of a set of quantities $\Phi$ together with a feedback protocol that extracts the maximal average amount of work. We show that the optimal non-equilibrium state to which the engine should be driven before the measurement is given by the normalised maximum-likelihood probability distribution of a statistical model that admits $\Phi$ as sufficient statistics. Furthermore, we show that the minimax universal code redundancy $\mathcal{R}^*$ associated with this model provides an upper bound to the work that the demon can extract on average from the cycle, in units of $k_{\rm B}T$. We also find that, in the limit of large $N$, the maximum average extracted work cannot exceed $H[\Phi]/2$, i.e., one half of the Shannon entropy of the measurement. Our results establish a connection between optimal work extraction in stochastic thermodynamics and optimal universal data compression, providing design principles for optimal information engines. In particular, they suggest that (i) optimal coding is thermodynamically efficient, and (ii) it is essential to drive the system into a critical state in order to achieve optimal performance.
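To make the compression quantities concrete, here is a minimal numerical sketch (my own illustration, not code from the paper) for the simplest case of $q=2$ partitions: the sufficient statistic $\Phi$ is the occupation number $k$ of one partition, the normalised maximum-likelihood (Shtarkov) distribution over $k$ plays the role of the optimal pre-measurement state, and the log of the Shtarkov sum gives the parametric complexity, the standard finite-$N$ proxy for the minimax redundancy $\mathcal{R}^*$ that bounds the average extractable work in units of $k_{\rm B}T$. All parameter values below are illustrative.

# A minimal sketch (illustrative, not from the paper): normalised maximum-likelihood
# (Shtarkov) distribution and parametric complexity for N particles in q = 2 partitions,
# with the occupation number k of one partition as the sufficient statistic Phi.
import math

def log_ml(k, n):
    # log of the maximised likelihood  max_theta P(k|theta) = C(n,k) (k/n)^k ((n-k)/n)^(n-k)
    ll = math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
    if 0 < k < n:
        ll += k * math.log(k / n) + (n - k) * math.log((n - k) / n)
    return ll  # the k = 0 and k = n terms contribute 0 from the 0^0 factors

def shtarkov(n):
    # Returns the NML distribution over k and the parametric complexity ln(Shtarkov sum), in nats.
    logs = [log_ml(k, n) for k in range(n + 1)]
    m = max(logs)
    log_comp = m + math.log(sum(math.exp(l - m) for l in logs))  # log-sum-exp for stability
    nml = [math.exp(l - log_comp) for l in logs]
    return nml, log_comp

for n in (10, 100, 1000):
    _, comp = shtarkov(n)
    # Rissanen's asymptotic for the Bernoulli family: 0.5*ln(n/(2*pi)) + ln(pi)
    asym = 0.5 * math.log(n / (2 * math.pi)) + math.log(math.pi)
    print(f"N={n:5d}   complexity = {comp:.4f} nats   asymptotic ~ {asym:.4f} nats")

For general $q$ the same sum runs over multinomial counts and the complexity grows like $\frac{q-1}{2}\ln N$, i.e. only logarithmically in the number of particles.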




Read also

This is an up-to-date introduction to and overview of the Minimum Description Length (MDL) Principle, a theory of inductive inference that can be applied to general problems in statistics, machine learning and pattern recognition. While MDL was originally based on data compression ideas, this introduction can be read without any knowledge thereof. It takes into account all major developments since 2007, the last time an extensive overview was written. These include new methods for model selection and averaging and hypothesis testing, as well as the first completely general definition of \emph{MDL estimators}. Incorporating these developments, MDL can be seen as a powerful extension of both penalized likelihood and Bayesian approaches, in which penalization functions and prior distributions are replaced by more general luckiness functions, average-case methodology is replaced by a more robust worst-case approach, and in which methods classically viewed as highly distinct, such as AIC vs BIC and cross-validation vs Bayes, can, to a large extent, be viewed from a unified perspective.
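As a toy illustration (my own example, not taken from the overview) of the penalized-likelihood face of MDL, the sketch below selects a polynomial degree by minimising a crude two-part description length, $\frac{n}{2}\ln(\mathrm{RSS}/n)+\frac{k}{2}\ln n$, where $k$ counts the fitted coefficients; the data-generating model and all constants are invented for the example.

# A minimal sketch of crude two-part / BIC-style MDL model selection:
# choose the polynomial degree minimising  (n/2) ln(RSS/n) + (k/2) ln n.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = np.linspace(-1, 1, n)
y = 1.0 - 2.0 * x + 3.0 * x**3 + rng.normal(scale=0.3, size=n)  # true degree is 3

def description_length(degree):
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
    k = degree + 1                                  # number of fitted coefficients
    return 0.5 * n * np.log(rss / n) + 0.5 * k * np.log(n)

best = min(range(9), key=description_length)
print("selected degree:", best)                     # typically 3: the penalty stops overfitting

Refined MDL, as discussed in the overview, replaces the simple $\frac{k}{2}\ln n$ penalty by NML or luckiness-based code lengths.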
We experimentally realize protocols that allow work to be extracted beyond the free energy difference from a single-electron transistor at the level of single thermodynamic trajectories. With two carefully designed out-of-equilibrium driving cycles featuring kicks of the control parameter, we demonstrate work extraction up to large fractions of $k_B T$, or with probabilities substantially greater than 1/2, despite zero free energy difference over the cycle. Our results are explained in the framework of nonequilibrium fluctuation relations. We thus show that irreversibility can be used as a resource for optimal work extraction even in the absence of feedback from an external operator.
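The fluctuation-relation framework invoked here can be illustrated with a toy Monte Carlo check (unrelated to the single-electron-transistor experiment itself): for a sudden quench of a harmonic-trap stiffness, individual trajectories can yield $W < \Delta F$ even though the Jarzynski average $\langle e^{-\beta W}\rangle = e^{-\beta\Delta F}$ is obeyed exactly. All parameters below are arbitrary illustrative choices.

# A toy Monte Carlo check of the Jarzynski relation <exp(-beta*W)> = exp(-beta*dF)
# for a sudden quench of a harmonic-trap stiffness k0 -> k1 (illustrative, not the experiment).
import numpy as np

rng = np.random.default_rng(1)
beta, k0, k1, n_traj = 1.0, 1.0, 2.0, 1_000_000

x = rng.normal(scale=np.sqrt(1.0 / (beta * k0)), size=n_traj)  # equilibrium samples in the k0 trap
W = 0.5 * (k1 - k0) * x**2                                     # sudden-quench work per trajectory
dF = 0.5 / beta * np.log(k1 / k0)                              # exact free-energy change

print("<exp(-beta W)>      :", float(np.exp(-beta * W).mean()))
print("exp(-beta dF)       :", float(np.exp(-beta * dF)))
print("P(W < dF) per traj. :", float((W < dF).mean()))         # single trajectories can beat dF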
Biological systems use energy to maintain non-equilibrium distributions for long times, e.g., of chemical concentrations or protein conformations. What are the fundamental limits on the power used to hold a stochastic system in a desired distribution over states? We study the setting of an uncontrolled Markov chain $Q$ altered into a controlled chain $P$ having a desired stationary distribution. Thermodynamic considerations lead to an appropriately defined Kullback-Leibler (KL) divergence rate $D(P\|Q)$ as the cost of control, a setting introduced by Todorov, corresponding to a Markov decision process with mean log loss action cost. The optimal controlled chain $P^*$ minimizes the KL divergence rate $D(\cdot\|Q)$ subject to a stationary distribution constraint, and the minimal KL divergence rate lower bounds the power used. While this optimization problem is familiar from the large deviations literature, we offer a novel interpretation as a minimum holding cost and compute the minimizer $P^*$ more explicitly than previously available. We state a version of our results for both discrete- and continuous-time Markov chains, and find explicit expressions for the important case of a reversible uncontrolled chain $Q$, for a two-state chain, and for birth-and-death processes.
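For the two-state case mentioned above, the holding-cost problem can be solved numerically with a one-parameter search (a sketch under my own parameterisation, not the paper's closed-form expressions): stationarity of a two-state chain fixes $P_{10}$ once $P_{01}$ and the target distribution are chosen, and the KL divergence rate $D(P\|Q)=\sum_i \pi_i \sum_j P_{ij}\ln(P_{ij}/Q_{ij})$ is then minimised over the remaining parameter. The chain $Q$ and target $\pi^*$ below are invented for illustration.

# A minimal numerical sketch of the two-state holding-cost problem:
# minimise the KL divergence rate D(P||Q) over controlled chains P with stationary distribution pi*.
import numpy as np

Q = np.array([[0.9, 0.1],
              [0.4, 0.6]])            # uncontrolled chain
pi_star = np.array([0.3, 0.7])        # desired stationary distribution

def kl_rate(a):
    # Two-state P is fixed by a = P[0,1]; stationarity pins b = P[1,0] = a * pi0 / pi1.
    b = a * pi_star[0] / pi_star[1]
    P = np.array([[1 - a, a], [b, 1 - b]])
    terms = P * np.log(P / Q)         # P_ij ln(P_ij / Q_ij), all entries strictly positive here
    return float(pi_star @ terms.sum(axis=1))

a_max = min(1.0, pi_star[1] / pi_star[0])
a_grid = np.linspace(1e-6, a_max - 1e-6, 20_000)
costs = np.array([kl_rate(a) for a in a_grid])
i = costs.argmin()
print(f"optimal P[0,1] = {a_grid[i]:.4f}, P[1,0] = {a_grid[i] * pi_star[0] / pi_star[1]:.4f}, "
      f"minimal KL rate = {costs[i]:.4f} nats per step")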
For closed quantum systems driven away from equilibrium, work is often defined in terms of projective measurements of initial and final energies. This definition leads to statistical distributions of work that satisfy nonequilibrium work and fluctuation relations. While this two-point measurement definition of quantum work can be justified heuristically by appeal to the first law of thermodynamics, its relationship to the classical definition of work has not been carefully examined. In this paper we employ semiclassical methods, combined with numerical simulations of a driven quartic oscillator, to study the correspondence between classical and quantal definitions of work in systems with one degree of freedom. We find that a semiclassical work distribution, built from classical trajectories that connect the initial and final energies, provides an excellent approximation to the quantum work distribution when the trajectories are assigned suitable phases and are allowed to interfere. Neglecting the interferences between trajectories reduces the distribution to that of the corresponding classical process. Hence, in the semiclassical limit, the quantum work distribution converges to the classical distribution, decorated by a quantum interference pattern. We also derive the form of the quantum work distribution at the boundary between classically allowed and forbidden regions, where this distribution tunnels into the forbidden region. Our results clarify how the correspondence principle applies in the context of quantum and classical work distributions, and contribute to the understanding of work and nonequilibrium work relations in the quantum regime.
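The two-point-measurement definition used here is easy to state in code. Below is a minimal sketch (a generic sudden quench of a qubit, chosen for brevity rather than the paper's driven quartic oscillator): measure the energy in the initial thermal state, quench $H_i \to H_f$, measure again, and weight each work value $W = E^f_m - E^i_n$ by $p_n\,|\langle m_f|n_i\rangle|^2$; the quantum Jarzynski equality $\langle e^{-\beta W}\rangle = Z_f/Z_i$ serves as a consistency check. The Hamiltonians and $\beta$ are illustrative.

# A minimal sketch of the two-point-measurement quantum work distribution
# for a sudden qubit quench H_i -> H_f, with a quantum Jarzynski check.
import numpy as np

beta = 1.0
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
H_i, H_f = sz, sz + 0.8 * sx

Ei, Vi = np.linalg.eigh(H_i)
Ef, Vf = np.linalg.eigh(H_f)
Zi, Zf = np.exp(-beta * Ei).sum(), np.exp(-beta * Ef).sum()

p_n = np.exp(-beta * Ei) / Zi                       # probability of first measurement outcome n
overlap2 = np.abs(Vf.conj().T @ Vi) ** 2            # overlap2[m, n] = |<m_f | n_i>|^2
work_vals, probs = [], []
for n in range(len(Ei)):
    for m in range(len(Ef)):
        work_vals.append(Ef[m] - Ei[n])             # two-point-measurement work value
        probs.append(p_n[n] * overlap2[m, n])
work_vals, probs = np.array(work_vals), np.array(probs)

print("work values   :", np.round(work_vals, 4))
print("probabilities :", np.round(probs, 4))
print("<exp(-beta W)> =", float(np.sum(probs * np.exp(-beta * work_vals))))
print("Z_f / Z_i      =", float(Zf / Zi))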
Work extraction from the Gibbs ensemble by a cyclic operation is impossible, as represented by the second law of thermodynamics. On the other hand, the eigenstate thermalization hypothesis (ETH) states that just a single energy eigenstate can describe a thermal equilibrium state. Here we attempt to unify these two perspectives and investigate the second law at the level of individual energy eigenstates, by examining the possibility of extracting work from a single energy eigenstate. Specifically, we performed numerical exact diagonalization of a quench protocol of local Hamiltonians and evaluated the number of work-extractable energy eigenstates. We found that it becomes exactly zero in a finite system size, implying that a positive amount of work cannot be extracted from any energy eigenstate, if one or both of the pre- and the post-quench Hamiltonians are non-integrable. We argue that the mechanism behind this numerical observation is based on the ETH for a non-local observable. Our result implies that quantum chaos, characterized by non-integrability, leads to a stronger version of the second law than the conventional formulation based on the statistical ensembles.
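A small exact-diagonalization sketch in the same spirit (a generic cyclic quench with Hamiltonians, quench, evolution time and system size of my own choosing; not necessarily the paper's exact protocol or counting convention): prepare each eigenstate of $H_0$, quench to $H_1$, evolve for a time $\tau$, quench back, and count the eigenstates from which a positive amount of work $W = E_n - \langle\psi(\tau)|H_0|\psi(\tau)\rangle$ is extracted over the cycle.

# A minimal exact-diagonalization sketch (illustrative parameters): count eigenstates of H0
# that allow positive extracted work under a cyclic quench H0 -> H1 -> (evolve) -> H0.
import numpy as np
from functools import reduce

L = 8                                    # 2^8 = 256 states, so full diagonalization is cheap
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op_at(op, site):
    # Embed a single-site operator at `site` in the L-spin Hilbert space.
    return reduce(np.kron, [op if i == site else I2 for i in range(L)])

def ising(g, h):
    # H = -sum sz_i sz_{i+1} - g sum sx_i - h sum sz_i  (open chain; h != 0 breaks integrability)
    H = sum(-op_at(sz, i) @ op_at(sz, i + 1) for i in range(L - 1))
    H = H + sum(-g * op_at(sx, i) - h * op_at(sz, i) for i in range(L))
    return H

H0, H1 = ising(g=1.0, h=0.5), ising(g=0.5, h=0.5)   # pre- and post-quench Hamiltonians
E0, V0 = np.linalg.eigh(H0)
E1, V1 = np.linalg.eigh(H1)

tau = 5.0
U = V1 @ np.diag(np.exp(-1j * E1 * tau)) @ V1.conj().T   # time evolution under H1 for time tau

extractable = 0
for n in range(len(E0)):
    psi = U @ V0[:, n]                                   # eigenstate |n> after the cycle
    W = E0[n] - np.real(psi.conj() @ H0 @ psi)           # extracted work over the cycle
    if W > 1e-10:
        extractable += 1
print(f"work-extractable eigenstates under this protocol: {extractable} / {len(E0)}")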