Assuming time-scale separation (TSS), a simple and unified theory of thermodynamics and stochastic thermodynamics is constructed for small classical systems strongly interacting with their environments in a controllable fashion. The total Hamiltonian is decomposed into a bath part and a system part, the latter being the Hamiltonian of mean force. Both the conditional equilibrium of the bath and the reduced equilibrium of the system are described by canonical ensemble theories with respect to their own Hamiltonians. The bath free energy is independent of the system variables and the control parameter. Furthermore, the weak-coupling theory of stochastic thermodynamics becomes applicable almost verbatim, even if the interaction and correlation between the system and its environment are strong and varied externally. Finally, this TSS-based approach also leads to some new insights into the origin of the second law of thermodynamics.
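For context, the Hamiltonian of mean force invoked above has a standard definition in the strong-coupling literature (not spelled out in this abstract): the interaction is traced over the bare bath equilibrium. For a total Hamiltonian $H_{\mathrm{tot}} = H_S(x) + H_I(x,y) + H_B(y)$ and $\beta = 1/k_B T$,

```latex
e^{-\beta H^{*}(x)}
  = \frac{\int \mathrm{d}y\, e^{-\beta\left[H_S(x) + H_I(x,y) + H_B(y)\right]}}
         {\int \mathrm{d}y\, e^{-\beta H_B(y)}}
\quad\Longleftrightarrow\quad
H^{*}(x) = H_S(x) - \beta^{-1}\ln\left\langle e^{-\beta H_I(x,y)}\right\rangle_{B},
```

where $\langle\cdot\rangle_B$ denotes an average over the Gibbs state of the bare bath. In the weak-coupling limit $H_I \to 0$, $H^{*}$ reduces to the bare system Hamiltonian $H_S$.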
One of the major resource requirements of computers - ranging from biological cells to human brains to high-performance (engineered) computers - is the energy used to run them. The costs of performing a computation have long been a focus of research in physics, going back to the early work of Landauer. One of the most prominent aspects of computers is that they are inherently nonequilibrium systems. However, the early research was done when nonequilibrium statistical physics was in its infancy, which meant the work was formulated in terms of equilibrium statistical physics. Since then there have been major breakthroughs in nonequilibrium statistical physics, which allow us to investigate the myriad aspects of the relationship between statistical physics and computation, extending well beyond the issue of how much work is required to erase a bit. In this paper I review some of this recent work on the stochastic thermodynamics of computation. After reviewing the salient parts of information theory, computer science theory, and stochastic thermodynamics, I summarize what has been learned about the entropic costs of performing a broad range of computations, extending from bit erasure to loop-free circuits to logically reversible circuits to information ratchets to Turing machines. These results reveal new, challenging engineering problems for how to design computers to have minimal thermodynamic costs. They also allow us to start to combine computer science theory and stochastic thermodynamics at a foundational level, thereby expanding both.
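The "early work of Landauer" mentioned here is the well-known bound stating that erasing one bit of information in an environment at temperature $T$ requires dissipating at least

```latex
Q_{\mathrm{erase}} \;\ge\; k_B T \ln 2
```

of heat. The work reviewed in this abstract extends such bit-level bounds to the entropic costs of full computational architectures.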
The ribosome is one of the largest and most complex macromolecular machines in living cells. It polymerizes a protein in a step-by-step manner as directed by the corresponding nucleotide sequence on the template messenger RNA (mRNA), a process referred to as 'translation' of the genetic message encoded in the sequence of the mRNA transcript. In each successful chemo-mechanical cycle during the (protein) elongation stage, the ribosome elongates the protein by a single subunit, an amino acid, and steps forward on the template mRNA by three nucleotides, called a codon. Therefore, a ribosome is also regarded as a molecular motor for which the mRNA serves as the track, whose step size is that of a codon, and for which the two molecules of GTP and one molecule of ATP hydrolyzed in that cycle serve as fuel. What adds further complexity is the existence of competing pathways leading to distinct cycles, branched pathways in each cycle, and futile consumption of fuel that leads neither to elongation of the nascent protein nor to forward stepping of the ribosome on its track. We investigate a model formulated in terms of the network of discrete chemo-mechanical states of a ribosome during the elongation stage of translation. The model is analyzed using a combination of stochastic thermodynamic and kinetic analysis based on a graph-theoretic approach. We derive the exact solution of the corresponding master equations. We represent the steady state in terms of the cycles of the underlying network and discuss the energy transduction processes. We identify the various possible modes of operation of a ribosome in terms of its average velocity and mean rate of GTP hydrolysis. We also compute entropy production as a function of the rates of the interstate transitions and the thermodynamic cost for accuracy of the translation process.
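The kind of analysis described here - solving a master equation over a discrete-state network and computing the steady-state entropy production - can be illustrated on a toy unicyclic network. The rates below are made up for illustration only (the ribosome model in the paper has many more states and branched cycles); the entropy production uses Schnakenberg's standard formula.

```python
import numpy as np

# Toy 3-state cyclic network. w[i, j] is the rate of the jump j -> i
# (arbitrary illustrative values, not from the ribosome model).
w = np.array([[0.0, 2.0, 1.0],
              [3.0, 0.0, 2.0],
              [1.0, 4.0, 0.0]])

# Master-equation generator: dp/dt = L @ p,
# with L[i, j] = w[i, j] - delta_ij * (total escape rate from j).
L = w - np.diag(w.sum(axis=0))

# Steady state: the null vector of L, normalized to a probability vector.
eigvals, eigvecs = np.linalg.eig(L)
p = np.real(eigvecs[:, np.argmin(np.abs(eigvals))])
p /= p.sum()

# Steady-state entropy production rate (Schnakenberg, units k_B = 1):
# sigma = (1/2) * sum_{i != j} (w_ij p_j - w_ji p_i) ln[(w_ij p_j)/(w_ji p_i)]
sigma = 0.0
for i in range(3):
    for j in range(3):
        if i != j and w[i, j] > 0 and w[j, i] > 0:
            Jf, Jb = w[i, j] * p[j], w[j, i] * p[i]
            sigma += 0.5 * (Jf - Jb) * np.log(Jf / Jb)

print(p, sigma)  # sigma >= 0; it vanishes only under detailed balance
```

Because the clockwise and counterclockwise rate products around the cycle differ here, the cycle affinity is nonzero and `sigma` is strictly positive, mirroring the nonequilibrium driving by GTP/ATP hydrolysis in the ribosome model.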
We apply the stochastic thermodynamics formalism to describe the dynamics of systems governed by complex Langevin and Fokker-Planck equations. In particular, we provide a simple and general recipe to calculate thermodynamical currents, dissipated heat, and propagating heat for networks of nonlinear oscillators. By using the Hodge decomposition of thermodynamical forces and fluxes, we derive a formula for entropy production that generalises the notion of non-potential forces and makes transparent the breaking of detailed balance and of time-reversal symmetry for states arbitrarily far from equilibrium. Our formalism is then applied to describe the off-equilibrium thermodynamics of a few examples, notably a continuum ferromagnet, a network of classical spin-oscillators, and the Frenkel-Kontorova model of nanofriction.
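As a point of reference for the generalised formula mentioned above (this is the textbook overdamped result, not the generalisation derived in the paper): for a one-dimensional Fokker-Planck equation $\partial_t p = -\partial_x J$ with probability current $J(x,t)$ and diffusion constant $D$, the total entropy production rate takes the well-known form (units $k_B = 1$)

```latex
\dot{S}_{\mathrm{tot}}(t) \;=\; \int \mathrm{d}x\, \frac{J(x,t)^2}{D\, p(x,t)} \;\ge\; 0,
```

with equality if and only if the current vanishes everywhere, i.e. under detailed balance. The Hodge decomposition in this abstract extends this picture to networks and fields, separating gradient-like from circulating contributions to the fluxes.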
A fluctuation relation is derived to extract the order parameter function $q(x)$ in weakly ergodic systems. The relation is based on measuring and classifying entropy production fluctuations according to the value of the overlap $q$ between configurations. For a fixed value of $q$, entropy production fluctuations are Gaussian distributed, allowing us to derive the quasi fluctuation-dissipation theorem (quasi-FDT) so characteristic of aging systems. The theory is validated by extracting $q(x)$ in various types of glassy models. It might be generally applicable to other nonequilibrium systems and to small experimental systems.
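A quick consistency check of the statement that Gaussian-distributed entropy production is compatible with a fluctuation relation: a Gaussian $P(s)$ obeys the detailed fluctuation relation $\ln[P(s)/P(-s)] = s$ exactly when its variance equals twice its mean. A minimal numerical sketch (toy numbers, not taken from the paper):

```python
import numpy as np

def gaussian_log_ratio(s, mu):
    """ln[P(s)/P(-s)] for a Gaussian with mean mu and variance 2*mu."""
    var = 2.0 * mu
    # Normalization constants cancel in the ratio; only exponents survive.
    return (-(s - mu) ** 2 + (s + mu) ** 2) / (2.0 * var)

# With variance = 2*mean, the exponents combine to 4*mu*s / (4*mu) = s,
# so P(s)/P(-s) = e^s holds identically.
s_vals = np.linspace(-3.0, 3.0, 7)
assert np.allclose(gaussian_log_ratio(s_vals, mu=1.5), s_vals)
```

The same algebra shows why classifying fluctuations at fixed overlap $q$ (so that each class is Gaussian) makes the quasi-FDT tractable.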
I give a quick overview of some of the theoretical background necessary for using modern non-equilibrium statistical physics to investigate the thermodynamics of computation. I first present some of the necessary concepts from information theory, and then introduce some of the most important types of computational machines considered in computer science theory. After this I present a central result from modern non-equilibrium statistical physics: an exact expression for the entropy flow out of a system undergoing a given dynamics with a given initial distribution over states. This central expression is crucial for analyzing how the total entropy flow out of a computer depends on its global structure, since that global structure determines the initial distributions over all of the computer's subsystems, and therefore (via the central expression) the entropy flows generated by all of those subsystems. I illustrate these results by analyzing some of the subtleties concerning the benefits that are sometimes claimed for implementing an irreversible computation with a reversible circuit constructed out of Fredkin gates.
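Though the abstract does not reproduce the exact expression it refers to, the standard bookkeeping behind statements of this type is the split of total entropy production into a system part and an entropy flow to the environment. Schematically, for a process taking an initial distribution $p_{\mathrm{i}}$ to a final distribution $p_{\mathrm{f}}$ (units $k_B = 1$, $S$ the Shannon entropy):

```latex
\mathcal{Q} \;\equiv\; \Delta S_{\mathrm{env}}
  \;=\; \sigma \;-\; \big[\, S(p_{\mathrm{f}}) - S(p_{\mathrm{i}}) \,\big],
\qquad \sigma \;\ge\; 0,
```

which makes explicit why the entropy flow out of each subsystem depends on that subsystem's initial distribution: even for a fixed dynamics, $S(p_{\mathrm{i}})$ (and hence the achievable minimum of $\mathcal{Q}$) is set by the computer's global structure.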