Synchronization is a widespread phenomenon observed in physical, biological, and social networks, and it persists even under the influence of strong noise. Previous research on oscillators subject to common noise has shown that noise can actually facilitate synchronization, as correlations in the dynamics can be inherited from the noise itself. However, in many spatially distributed networks, such as the mammalian circadian system, the noise that different oscillators experience can be effectively uncorrelated. Here, we show that uncorrelated noise can in fact enhance synchronization when the oscillators are coupled. Strikingly, our analysis also shows that uncorrelated noise can be more effective than common noise in enhancing synchronization. We first establish these results theoretically for phase and phase-amplitude oscillators subject to additive noise, multiplicative noise, or both. We then confirm the predictions through experiments on coupled electrochemical oscillators. Our findings suggest that uncorrelated noise can promote rather than inhibit coherence in natural systems and that the same effect can be harnessed in engineered systems.
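The setting above can be probed numerically with a minimal sketch: coupled Kuramoto-type phase oscillators driven through an assumed phase-sensitivity function Z(θ) = sin θ, with the noise increments either shared (common noise) or independent (uncorrelated noise). This is a generic testbed, not the specific model or parameter regime of the study; all parameter values below are illustrative.

```python
import numpy as np

def simulate(n=10, coupling=0.5, sigma=0.3, common=False,
             dt=0.01, steps=20000, seed=0):
    """Euler-Maruyama integration of noisy coupled phase oscillators,
        dtheta_i = [omega_i + K r sin(psi - theta_i)] dt + sigma Z(theta_i) dW_i,
    where r e^{i psi} is the complex mean field, Z(theta) = sin(theta), and
    dW_i is either one shared Wiener increment (common noise) or an
    independent increment per oscillator (uncorrelated noise).
    Returns the time-averaged Kuramoto order parameter r in [0, 1]."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 0.1, n)       # heterogeneous natural frequencies
    theta = rng.uniform(0, 2 * np.pi, n)  # random initial phases
    r_sum, half = 0.0, steps // 2
    for step in range(steps):
        z = np.mean(np.exp(1j * theta))   # complex mean field r e^{i psi}
        r, psi = np.abs(z), np.angle(z)
        drift = omega + coupling * r * np.sin(psi - theta)
        if common:
            dW = np.full(n, rng.normal(0.0, np.sqrt(dt)))
        else:
            dW = rng.normal(0.0, np.sqrt(dt), n)
        theta = theta + drift * dt + sigma * np.sin(theta) * dW
        if step >= half:                  # average r after the transient
            r_sum += r
    return r_sum / (steps - half)
```

Sweeping `sigma` and comparing `simulate(common=False)` against `simulate(common=True)` gives an empirical handle on the comparison the abstract describes; whether uncorrelated noise outperforms common noise depends on the coupling and noise regime, which the sketch makes no claim about.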
Natural and artificial networks, from the cerebral cortex to large-scale power grids, face the challenge of converting noisy inputs into robust signals. The input fluctuations often exhibit complex yet statistically reproducible correlations that reflect underlying internal or environmental processes, such as synaptic noise or atmospheric turbulence. This raises the practically and biophysically relevant question of whether and how noise filtering can be hard-wired directly into a network's architecture. By considering generic phase-oscillator arrays under cost constraints, we explore here analytically and numerically the design, efficiency, and topology of noise-canceling networks. Specifically, we find that as the input fluctuations become more correlated in space or time, optimal network architectures become sparser and more hierarchically organized, resembling the vasculature in plants or animals. More broadly, our results provide concrete guiding principles for designing more robust and efficient power grids and sensor networks.
The relation between network structure and dynamics is a key determinant of the behavior of complex systems in numerous domains. An important long-standing problem concerns the properties of the networks that optimize the dynamics with respect to a given performance measure. Here we show that such optimization can lead to sensitive dependence of the dynamics on the structure of the network. Specifically, using diffusively coupled systems as examples, we demonstrate that the stability of a dynamical state can exhibit sensitivity to unweighted structural perturbations (i.e., link removals and node additions) for undirected optimal networks and to weighted perturbations (i.e., small changes in link weights) for directed optimal networks. As mechanisms underlying this sensitivity, we identify discontinuous transitions occurring in the complement of undirected optimal networks and the prevalence of eigenvector degeneracy in directed optimal networks. These findings establish a unified characterization of networks optimized for dynamical stability, which we illustrate using Turing instability in activator-inhibitor systems, synchronization in power-grid networks, network diffusion, and several other network processes. Our results suggest that the network structure of a complex system operating near an optimum can potentially be fine-tuned for significantly enhanced stability compared to what one might expect from simple extrapolation. On the other hand, they also suggest constraints on how close to the optimum the system can be in practice. Finally, the results have potential implications for biophysical networks, which have evolved under the competing pressures of optimizing fitness while remaining robust against perturbations.
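For diffusively coupled identical units, a standard stability proxy is the algebraic connectivity λ₂ of the graph Laplacian, which sets the slowest relaxation rate toward the synchronized state. A minimal, generic probe of structural sensitivity (not the paper's optimization procedure) is to measure how λ₂ responds when individual links are removed:

```python
import itertools
import numpy as np

def laplacian(adj):
    """Graph Laplacian L = D - A of a weighted adjacency matrix."""
    return np.diag(adj.sum(axis=1)) - adj

def algebraic_connectivity(adj):
    """Second-smallest Laplacian eigenvalue (lambda_2), which governs the
    slowest decay toward consensus/synchrony under diffusive coupling."""
    return np.sort(np.linalg.eigvalsh(laplacian(adj)))[1]

def link_removal_impact(adj):
    """Change in lambda_2 caused by deleting each existing undirected link."""
    n = adj.shape[0]
    base = algebraic_connectivity(adj)
    impact = {}
    for i, j in itertools.combinations(range(n), 2):
        if adj[i, j] > 0:
            pert = adj.copy()
            pert[i, j] = pert[j, i] = 0.0
            impact[(i, j)] = algebraic_connectivity(pert) - base
    return base, impact
```

As a concrete example, the complete graph on four nodes has λ₂ = 4, yet removing any single one of its six links drops λ₂ to 2, i.e., a single unweighted perturbation halves the stability margin; probes of this kind make the abstract's notion of structural sensitivity quantitative for any given network.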
Complex chemical reaction networks, which underlie many industrial and biological processes, often exhibit non-monotonic changes in chemical species concentrations, typically described using nonlinear models. Such non-monotonic dynamics are in principle possible even in linear models if the matrices defining the models are non-normal, as characterized by a necessarily non-orthogonal set of eigenvectors. However, the extent to which non-normality is responsible for non-monotonic behavior remains an open question. Here, using a master equation to model the reaction dynamics, we derive a general condition for observing non-monotonic dynamics of individual species, establishing that non-normality promotes non-monotonicity but is not a requirement for it. In contrast, we show that non-normality is a requirement for non-monotonic dynamics to be observed in the Rényi entropy. Using hydrogen combustion as an example application, we demonstrate that non-monotonic dynamics under experimental conditions are supported by a linear chain of connected components, in contrast with the dominance of a single giant component observed in typical random reaction networks. The exact linearity of the master equation enables the development of rigorous theory and simulations for dynamical networks of unprecedented size (approaching $10^5$ dynamical variables, even for a network of only 20 reactions and involving fewer than 100 atoms). Our conclusions are expected to hold for other combustion processes, and the general theory we develop is applicable to all chemical reaction networks, including biological ones.
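The mechanism by which non-normality permits non-monotonic behavior in a linear system can be seen in a two-variable toy example (a generic stable non-normal matrix, not the paper's master-equation operator): both eigenvalues are negative, yet one component transiently grows before decaying.

```python
import numpy as np

# A stable but non-normal matrix: eigenvalues -1 and -2 are both negative,
# but A @ A.T != A.T @ A, so the eigenvectors are non-orthogonal.
A = np.array([[-1.0, 5.0],
              [ 0.0, -2.0]])
assert not np.allclose(A @ A.T, A.T @ A)  # confirms non-normality

# Exact solution of dx/dt = A x with x(0) = (0, 1):
#   x2(t) = exp(-2 t),   x1(t) = 5 * (exp(-t) - exp(-2 t)).
t = np.linspace(0.0, 5.0, 501)
x1 = 5.0 * (np.exp(-t) - np.exp(-2.0 * t))

# x1 starts at 0, transiently rises to 5/4 at t = ln 2, then decays to 0:
peak = x1.max()
```

Despite asymptotic stability, `x1` overshoots to 1.25 before relaxing, which is exactly the kind of non-monotonic transient that a normal matrix with the same spectrum cannot produce from this initial condition.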
We study the statistical physics of a surprising phenomenon arising in large networks of excitable elements in response to noise: while at low noise, solutions remain in the vicinity of the resting state and large-noise solutions show asynchronous activity, the network displays orderly, perfectly synchronized periodic responses at intermediate noise levels. We show that this phenomenon is fundamentally stochastic and collective in nature. Indeed, for noise and coupling within specific ranges, an asymmetry in the transition rates between a resting and an excited regime progressively builds up, increasing the fraction of excited neurons until a chain reaction triggers a macroscopic synchronized excursion, followed by a collective return to rest from which the process starts afresh, yielding the observed periodic synchronized oscillations. We further uncover a novel anti-resonance phenomenon: noise-induced synchronized oscillations disappear when the system is driven by periodic stimulation with frequency within a specific range. In that anti-resonance regime, the system is optimal for measures of information capacity. This observation provides a new hypothesis accounting for the efficiency of Deep Brain Stimulation therapies in Parkinson's disease, a neurodegenerative disease characterized by increased synchronization of brain motor circuits. We further discuss the universality of these phenomena in the class of stochastic networks of excitable elements with confining coupling, and illustrate this universality by analyzing various classical models of neuronal networks. Altogether, these results uncover some universal mechanisms supporting a regularizing impact of noise in excitable systems, reveal a novel anti-resonance phenomenon in these systems, and propose a new hypothesis for the efficiency of high-frequency stimulation in Parkinson's disease.
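A minimal testbed for the scenario described above, assuming a mean-field-coupled FitzHugh-Nagumo network in the excitable regime with independent noise on the fast variable (a standard stand-in, not the specific model class analyzed in the study): at zero noise the population sits exactly at rest, while noise can drive threshold crossings and collective excursions. All parameter values are illustrative.

```python
import numpy as np

def fhn_network(n=50, k=0.5, sigma=0.0, a=1.05, eps=0.08,
                dt=0.01, steps=50000, seed=1):
    """Mean-field-coupled FitzHugh-Nagumo units in the excitable regime
    (a > 1), driven by independent noise on the fast variable:
        dv_i = [v_i - v_i^3/3 - w_i + k (vbar - v_i)] dt + sigma dW_i,
        dw_i = eps (v_i + a) dt.
    Returns the trajectory of the population-mean fast variable."""
    rng = np.random.default_rng(seed)
    v = np.full(n, -a)                    # exact resting fixed point
    w = np.full(n, -a + a**3 / 3.0)
    trace = np.empty(steps)
    for t in range(steps):
        vbar = v.mean()
        dv = v - v**3 / 3.0 - w + k * (vbar - v)
        dw = eps * (v + a)
        v = v + dv * dt + sigma * rng.normal(0.0, np.sqrt(dt), n)
        w = w + dw * dt
        trace[t] = v.mean()
    return trace
```

Scanning `sigma` from small to large and inspecting `trace` for periodic collective excursions is one way to look for the intermediate-noise synchronized regime the abstract describes; the sketch itself only sets up the dynamics and makes no claim about where (or whether) that regime occurs for these illustrative parameters.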
We introduce a computational scheme for calculating the electronic structure of random alloys that includes electronic correlations within the framework of the combined density functional and dynamical mean-field theory. By making use of the particularly simple parameterization of the electron Green's function within the linearized muffin-tin orbitals method, we show that it is possible to greatly simplify the embedding of the self-energy. This in turn facilitates the implementation of the coherent potential approximation, which is used to model the substitutional disorder. The computational technique is tested on the Cu-Pd binary alloy system and on disordered Mn-Ni interchange in the half-metallic compound NiMnSb.