Random Boolean networks are models of disordered causal systems that can occur in cells and the biosphere. These are open thermodynamic systems exhibiting a flow of energy that is dissipated at a finite rate. Life does work to acquire more energy, then uses the available energy it has gained to perform more work. It is plausible that natural selection has optimized many biological systems for power efficiency: useful power generated per unit fuel. In this letter we begin to investigate these questions for random Boolean networks using Landauer's erasure principle, which defines a minimum entropy cost for bit erasure. We show that critical Boolean networks maximize available power efficiency, which requires that the system have a finite displacement from equilibrium. Our initial results may extend to more realistic models for cells and ecosystems.
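For orientation, the Landauer bound invoked above sets a floor of k_B T ln 2 joules of dissipation per erased bit. The following minimal sketch (not part of the original analysis; the erasure rate used in the example is purely hypothetical) evaluates that floor numerically.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_min_power(bits_erased_per_second, temperature_kelvin=300.0):
    """Minimum dissipated power (watts) implied by Landauer's bound of
    k_B * T * ln(2) joules per erased bit."""
    return bits_erased_per_second * K_B * temperature_kelvin * math.log(2)

# Hypothetical example: an element erasing 1e6 bits per second at 300 K
print(landauer_min_power(1e6))  # ~2.9e-15 W
```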
The amount of mutual information contained in time series of two elements gives a measure of how well their activities are coordinated. In a large, complex network of interacting elements, such as a genetic regulatory network within a cell, the average of the mutual information over all pairs <I> is a global measure of how well the system can coordinate its internal dynamics. We study this average pairwise mutual information in random Boolean networks (RBNs) as a function of the distribution of Boolean rules implemented at each element, assuming that the links in the network are randomly placed. Efficient numerical methods for calculating <I> show that as the number of network nodes N approaches infinity, the quantity N<I> exhibits a discontinuity at parameter values corresponding to critical RBNs. For finite systems it peaks near the critical value, but slightly into the disordered regime, for typical parameter variations. The high values of N<I> stem from indirect correlations between pairs of elements from different long chains with a common starting point. The contribution from pairs that are directly linked approaches zero for critical networks and peaks deep in the disordered regime.
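As an illustration of the quantity being averaged in <I> (not the efficient numerical method the abstract refers to), the following sketch estimates the mutual information between two binary time series directly from their joint symbol frequencies.

```python
import numpy as np

def pairwise_mutual_information(x, y):
    """Plug-in estimate (in bits) of the mutual information between two
    binary time series x and y, from empirical joint symbol frequencies."""
    x = np.asarray(x, dtype=int)
    y = np.asarray(y, dtype=int)
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            p_ab = np.mean((x == a) & (y == b))
            p_a, p_b = np.mean(x == a), np.mean(y == b)
            if p_ab > 0:
                mi += p_ab * np.log2(p_ab / (p_a * p_b))
    return mi

# <I> is then the average of this quantity over all node pairs,
# taken from long time series of the network dynamics.
```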
Despite their apparent simplicity, random Boolean networks display a rich variety of dynamical behaviors. Much work has been focused on the properties and abundance of attractors. The topologies of random Boolean networks with one input per node can be seen as graphs of random maps. We introduce an approach to investigating random maps and finding analytical results for attractors in random Boolean networks with the corresponding topology. Approximating some other non-chaotic networks to be of this class, we apply the analytic results to them. For this approximation, we observe strikingly good agreement in the numbers of attractors of various lengths. We also investigate observables related to the average number of attractors, comparing it with the typical number of attractors. Here, we find strong differences that highlight the difficulties in making direct comparisons between random Boolean networks and real systems. Furthermore, we demonstrate the power of our approach by deriving some results for random maps. These results include the distribution of the number of components in random maps, along with asymptotic expansions for cumulants up to fourth order.
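The analytical treatment referred to above is not reproduced here, but the following brute-force sketch illustrates the objects involved: a K = 1 random Boolean network whose wiring is a random map, and an exhaustive enumeration of its attractor cycles for a small system size (constant rules, which the full K = 1 ensemble also contains, are omitted).

```python
import itertools, random

def random_k1_network(n, seed=0):
    """Random Boolean network with one input per node (K = 1): each node
    copies or inverts a randomly chosen node."""
    rng = random.Random(seed)
    inputs = [rng.randrange(n) for _ in range(n)]
    invert = [rng.random() < 0.5 for _ in range(n)]
    return lambda s: tuple(s[inputs[i]] ^ invert[i] for i in range(n))

def attractors(step, n):
    """Enumerate attractor cycles by iterating every one of the 2^n states;
    feasible only for small n, unlike the analytical approach of the paper."""
    cycles = set()
    for state in itertools.product((0, 1), repeat=n):
        seen = set()
        while state not in seen:        # walk until the trajectory repeats
            seen.add(state)
            state = step(state)
        cycle = {state}                 # the repeated state lies on a cycle
        s = step(state)
        while s != state:
            cycle.add(s)
            s = step(s)
        cycles.add(frozenset(cycle))
    return cycles

cyc = attractors(random_k1_network(8), 8)
print(len(cyc), sorted(len(c) for c in cyc))  # number and lengths of attractors
```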
We study the stable attractors of a class of continuous dynamical systems that may be idealized as networks of Boolean elements, with the goal of determining which Boolean attractors, if any, are good approximations of the attractors of generic continuous systems. We investigate the dynamics in simple rings and rings with one additional self-input. An analysis of switching characteristics and pulse propagation explains the relation between attractors of the continuous systems and their Boolean approximations. For simple rings, reliable Boolean attractors correspond to stable continuous attractors. For networks with more complex logic, the qualitative features of continuous attractors are influenced by inherently non-Boolean characteristics of switching events.
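A minimal sketch of the kind of continuous idealization discussed above, assuming a generic Hill-type switching function rather than the specific model of the paper: a ring of three mutually inverting elements whose continuous dynamics settle onto a cyclic attractor, the counterpart of the oscillatory attractor of the corresponding Boolean ring.

```python
import numpy as np
from scipy.integrate import solve_ivp

def ring_ode(t, x, alpha=10.0, hill=4.0):
    """Continuous idealization of a ring of inverting Boolean elements:
    each element relaxes toward a steep (Hill-type) decreasing function of
    its predecessor; in the infinitely steep limit the update is Boolean."""
    drive = alpha / (1.0 + np.roll(x, 1) ** hill)
    return drive - x

t_eval = np.linspace(0, 60, 2000)
sol = solve_ivp(ring_ode, (0, 60), [0.2, 1.0, 3.0], t_eval=t_eval)
late = sol.y[0, len(t_eval) // 2:]
print(late.min(), late.max())  # a sustained swing indicates a cyclic attractor
```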
We study information processing in populations of Boolean networks with evolving connectivity and systematically explore the interplay between the learning capability, robustness, the network topology, and the task complexity. We solve a long-standing open question and find computationally that, for large system sizes $N$, adaptive information processing drives the networks to a critical connectivity $K_{c}=2$. For finite-size networks, the connectivity approaches the critical value as a power law of the system size $N$. We show that network learning and generalization are optimized near criticality, given that the task complexity and the amount of information provided exceed threshold values. Both random and evolved networks exhibit maximal topological diversity near $K_{c}$. We hypothesize that this supports efficient exploration and robustness of solutions. This is also reflected in our observation that the variance of the fitness values is maximal in critical network populations. Finally, we discuss implications of our results for determining the optimal topology of adaptive dynamical networks that solve computational tasks.
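The critical connectivity $K_{c}=2$ quoted above coincides with the standard annealed-approximation (Derrida) criterion for unbiased rules; the sketch below is not the adaptive algorithm studied in the paper, it merely evaluates the average sensitivity 2Kp(1-p), whose crossing of 1 marks the order-chaos boundary.

```python
def average_sensitivity(K, p):
    """Annealed-approximation order parameter for random Boolean networks
    with in-degree K and rule bias p: lambda = 2*K*p*(1-p).
    lambda < 1 is ordered, lambda > 1 is chaotic, lambda = 1 is critical."""
    return 2.0 * K * p * (1.0 - p)

# For unbiased rules (p = 0.5) criticality sits at K_c = 2:
for K in (1, 2, 3):
    print(K, average_sensitivity(K, 0.5))
```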
Molecular motors transduce chemical energy obtained from hydrolyzing ATP into mechanical work exerted against an external force. We calculate their efficiency at maximum power output for two simple generic models and show that the qualitative behaviour depends crucially on the position of the transition state. Specifically, we find a transition state near the initial state (sometimes characterized as a power stroke) to be most favorable with respect to both high power output and high efficiency at maximum power. In this regime, driving the motor further out of equilibrium by applying higher chemical potential differences can even, counter-intuitively, increase the efficiency.
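A minimal sketch in the spirit of such generic models (the parametrization below, with a single load-sharing factor theta standing in for the transition-state position, is an assumption and not necessarily either of the paper's two models): it locates the opposing force that maximizes power output and reports the efficiency there, showing that a transition state near the initial state (small theta) gives the higher efficiency at maximum power.

```python
import numpy as np

def efficiency_at_max_power(delta_mu, theta, w0=1.0):
    """One-step motor model in units with k_B*T = 1 and step size d = 1:
    forward rate w0*exp(delta_mu - theta*f), backward rate w0*exp((1-theta)*f),
    so detailed balance w_f/w_b = exp(delta_mu - f) holds. theta encodes the
    transition-state position (load-sharing factor)."""
    f = np.linspace(0.0, delta_mu, 5001)           # opposing forces up to stall
    v = w0 * (np.exp(delta_mu - theta * f) - np.exp((1.0 - theta) * f))
    power = f * v                                  # mechanical power output
    f_star = f[np.argmax(power)]
    return f_star / delta_mu                       # eta* = f_star * d / delta_mu

# Smaller theta (transition state near the initial state) gives higher eta*
for theta in (0.1, 0.5, 0.9):
    print(theta, round(efficiency_at_max_power(delta_mu=10.0, theta=theta), 3))
```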