The activity of a sparse network of leaky integrate-and-fire neurons is carefully revisited with reference to a regime of bona-fide asynchronous dynamics. The study is preceded by a finite-size scaling analysis, carried out to identify a setup where collective synchronization is negligible. The comparison between quenched and annealed networks reveals the emergence of substantial differences when the coupling strength is increased, via a scenario somewhat reminiscent of a phase transition. For sufficiently strong synaptic coupling, quenched networks exhibit a highly bursting neural activity, well reproduced by a self-consistent approach based on the assumption that the input synaptic current is the superposition of independent renewal processes. The distribution of interspike intervals turns out to be relatively long-tailed, a crucial feature required for the self-sustainment of the bursting activity in a regime where neurons operate on average (much) below threshold. A semi-quantitative analogy with Ornstein-Uhlenbeck processes helps validate this interpretation. Finally, an alternative explanation in terms of Poisson processes is offered under the additional assumption of mutual correlations among excitatory and inhibitory spikes.
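As a minimal, self-contained illustration of the below-threshold regime discussed above (not the paper's sparse-network setup), one can simulate a single leaky integrate-and-fire neuron whose mean input current sits below threshold, so that spikes are triggered by fluctuations alone; all parameter values and the white-noise input are illustrative assumptions.

```python
import numpy as np

def simulate_lif(i_mean=0.9, i_std=0.8, v_th=1.0, v_reset=0.0,
                 tau=20.0, dt=0.1, n_steps=200_000, seed=0):
    """Euler-Maruyama simulation of one leaky integrate-and-fire neuron,

        dV/dt = (-V + I(t)) / tau,

    with a white-noise current I(t) standing in for the synaptic input
    (an assumption; the paper considers structured network input).
    The neuron spikes when V >= v_th and is then reset to v_reset.
    """
    rng = np.random.default_rng(seed)
    v = v_reset
    spike_times = []
    for k in range(n_steps):
        # Gaussian white noise: per-step std scales as 1/sqrt(dt)
        i_t = i_mean + i_std * rng.standard_normal() / np.sqrt(dt)
        v += dt * (-v + i_t) / tau
        if v >= v_th:
            spike_times.append(k * dt)
            v = v_reset
    return np.array(spike_times)

# Mean drive i_mean < v_th: the neuron fires only thanks to fluctuations,
# yielding irregular, long-tailed interspike intervals.
spike_times = simulate_lif()
isi = np.diff(spike_times)        # interspike intervals
cv = isi.std() / isi.mean()       # coefficient of variation (irregularity)
```

In this fluctuation-driven regime the membrane potential behaves, between spikes, like an Ornstein-Uhlenbeck process, which is the semi-quantitative analogy the abstract invokes.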
We investigate the possibility that narrowband oscillations may emerge from completely asynchronous, independent neural firing. We find that a population of asynchronous neurons may produce narrowband oscillations if each neuron fires quasi-periodically, and we deduce bounds on the degree of variability in neural spike-timing which will permit the emergence of such oscillations. These results suggest a novel mechanism of neural rhythmogenesis, and they help to explain recent experimental reports of large-amplitude local field potential oscillations in the absence of neural spike-timing synchrony. Simply put, although synchrony can produce oscillations, oscillations do not always imply the existence of synchrony.
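The mechanism can be sketched numerically: summing many quasi-periodic spike trains with independent random phases produces a population signal with no spike-timing synchrony, yet its power spectrum shows a narrowband peak at the single-neuron firing frequency. All parameters below (population size, period, jitter) are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 500
period = 50.0        # mean interspike interval, in time steps (illustrative)
jitter_sd = 1.0      # spike-timing jitter; small jitter => quasi-periodic firing
n_steps = 10_000

# Each neuron fires quasi-periodically with an independent random phase,
# so there is no spike-timing synchrony anywhere in the population.
pop_rate = np.zeros(n_steps)
for _ in range(n_neurons):
    t = rng.uniform(0, period)            # independent initial phase
    while t < n_steps:
        pop_rate[int(t)] += 1
        t += period + jitter_sd * rng.standard_normal()

# Power spectrum of the summed activity, a crude proxy for the LFP.
spectrum = np.abs(np.fft.rfft(pop_rate - pop_rate.mean())) ** 2
freqs = np.fft.rfftfreq(n_steps, d=1.0)
peak_freq = freqs[spectrum.argmax()]      # narrowband peak near 1/period
```

Increasing `jitter_sd` broadens and eventually destroys the spectral peak, which is the kind of variability bound the abstract refers to.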
Neurons modeled by the Rulkov map display a variety of dynamic regimes that include tonic spikes and chaotic bursting. Here we study an ensemble of bursting neurons coupled with the Watts-Strogatz small-world topology. We characterize the sequences of bursts using the symbolic method of time-series analysis known as ordinal analysis, which detects nonlinear temporal correlations. We show that the probabilities of the different symbols distinguish different dynamical regimes, which depend on the coupling strength and the network topology. These regimes have different spatio-temporal properties that can be visualized with raster plots.
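The two ingredients named above can be sketched for a single uncoupled neuron: the chaotic Rulkov map generating a bursting time series, and ordinal analysis converting it into permutation-symbol probabilities. Parameter values follow commonly used choices for the chaotic-bursting regime, and the word length is an illustrative assumption.

```python
import numpy as np
from itertools import permutations

def rulkov(alpha=4.1, sigma=0.001, beta=0.001, n=50_000, x0=-1.0, y0=-2.9):
    """Iterate the chaotic Rulkov map and return the fast variable x:

        x_{n+1} = alpha / (1 + x_n^2) + y_n
        y_{n+1} = y_n - sigma * x_n - beta

    With small sigma, beta the slow variable y drives bursting in x.
    """
    x, y = x0, y0
    xs = np.empty(n)
    for i in range(n):
        x, y = alpha / (1.0 + x * x) + y, y - sigma * x - beta
        xs[i] = x
    return xs

def ordinal_probs(series, d=3):
    """Probabilities of the d! ordinal (permutation) patterns of a series.

    Each window of d consecutive values is mapped to the permutation that
    sorts it; the histogram of these symbols captures nonlinear temporal
    correlations, as in ordinal time-series analysis.
    """
    counts = {p: 0 for p in permutations(range(d))}
    for i in range(len(series) - d + 1):
        counts[tuple(np.argsort(series[i:i + d]))] += 1
    total = sum(counts.values())
    return {p: c / total for p, c in counts.items()}

x = rulkov()
probs = ordinal_probs(x, d=3)
```

In the coupled Watts-Strogatz setting studied in the paper, the same symbol probabilities would be computed on the burst sequences and compared across coupling strengths and rewiring probabilities.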
The brain can be understood as a collection of interacting neuronal oscillators, but the extent to which its sustained activity is due to coupling among brain areas is still unclear. Here we study the joint dynamics of two cortical columns described by Jansen-Rit neural mass models, and show that coupling between the columns gives rise to stochastic initiations of sustained collective activity, which can be interpreted as epileptic events. For large enough coupling strengths, termination of these events results mainly from the emergence of synchronization between the columns, and thus is controlled by coupling instead of noise. Stochastic triggering and noise-independent durations are characteristic of excitable dynamics, and thus we interpret our results in terms of collective excitability.
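A numerical sketch of the setting, using the standard Jansen-Rit parameter values: two columns, each a set of three second-order synaptic filters closed by sigmoidal rate functions, coupled through each other's pyramidal output. The coupling scheme, the coupling strength `K`, the simple Euler integration, and the piecewise-constant treatment of the input noise are all simplifying assumptions for illustration.

```python
import numpy as np

# Standard Jansen-Rit parameters
A, B = 3.25, 22.0                     # excitatory / inhibitory gains (mV)
a, b = 100.0, 50.0                    # inverse synaptic time constants (1/s)
C = 135.0
C1, C2, C3, C4 = C, 0.8 * C, 0.25 * C, 0.25 * C
e0, v0, r = 2.5, 6.0, 0.56

def sigm(v):
    """Sigmoidal potential-to-firing-rate conversion."""
    return 2.0 * e0 / (1.0 + np.exp(r * (v0 - v)))

def step(y, p, dt):
    """One Euler step of a single Jansen-Rit column.

    y = [y0, y1, y2, y3, y4, y5]; p is the total external input rate.
    The column's output (a crude EEG/LFP proxy) is y1 - y2.
    """
    y0, y1, y2, y3, y4, y5 = y
    dy = np.array([
        y3, y4, y5,
        A * a * sigm(y1 - y2) - 2 * a * y3 - a * a * y0,
        A * a * (p + C2 * sigm(C1 * y0)) - 2 * a * y4 - a * a * y1,
        B * b * C4 * sigm(C3 * y0) - 2 * b * y5 - b * b * y2,
    ])
    return y + dt * dy

# Two columns, each receiving noisy input plus the other's (sigmoidally
# converted) pyramidal output; K is an illustrative coupling strength.
rng = np.random.default_rng(2)
dt, n_steps, K = 1e-4, 50_000, 10.0
ya, yb = np.zeros(6), np.zeros(6)
out_a = np.empty(n_steps)
for i in range(n_steps):
    pa = 220.0 + 22.0 * rng.standard_normal()
    pb = 220.0 + 22.0 * rng.standard_normal()
    ya_new = step(ya, pa + K * sigm(yb[1] - yb[2]), dt)
    yb_new = step(yb, pb + K * sigm(ya[1] - ya[2]), dt)
    ya, yb = ya_new, yb_new
    out_a[i] = ya[1] - ya[2]
```

In the study summarized above, the quantities of interest would be the durations of high-amplitude (epileptic-like) episodes in such outputs and how they depend on the coupling strength versus the noise level.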
The understanding of neural activity patterns is fundamentally linked to an understanding of how the brain's network architecture shapes dynamical processes. Established approaches rely mostly on deviations of a given network from certain classes of random graphs. Hypotheses about the supposed role of prominent topological features (for instance, the roles of modularity, network motifs, or hierarchical network organization) are derived from these deviations. An alternative strategy could be to study deviations of network architectures from regular graphs (rings, lattices) and consider the implications of such deviations for self-organized dynamic patterns on the network. Following this strategy, we draw on the theory of spatiotemporal pattern formation and propose a novel perspective for analyzing dynamics on networks, by evaluating how the self-organized dynamics are confined by network architecture to a small set of permissible collective states. In particular, we discuss the role of prominent topological features of brain connectivity, such as hubs, modules and hierarchy, in shaping activity patterns. We illustrate the notion of network-guided pattern formation with numerical simulations and outline how it can facilitate the understanding of neural dynamics.
The ongoing exponential rise in recording capacity calls for new approaches for analysing and interpreting neural data. Effective dimensionality has emerged as an important property of neural activity across populations of neurons, yet different studies rely on different definitions and interpretations of this quantity. Here we focus on intrinsic and embedding dimensionality, and discuss how they might reveal computational principles from data. Reviewing recent works, we propose that the intrinsic dimensionality reflects information about the latent variables encoded in collective activity, while embedding dimensionality reveals the manner in which this information is processed. We conclude by highlighting the role of network models as an ideal substrate for testing specific hypotheses about the computational principles reflected in intrinsic and embedding dimensionality.
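One common operational proxy for effective dimensionality (one definition among the several the abstract alludes to) is the participation ratio of the covariance eigenvalue spectrum. The sketch below applies it to synthetic population activity with a known number of latent variables; the population size, latent count, and noise level are illustrative assumptions.

```python
import numpy as np

def participation_ratio(activity):
    """Effective dimensionality of activity (time x neurons) via the
    participation ratio of the covariance eigenvalues:

        PR = (sum_i lambda_i)^2 / sum_i lambda_i^2
    """
    cov = np.cov(activity, rowvar=False)
    lam = np.linalg.eigvalsh(cov)
    return lam.sum() ** 2 / (lam ** 2).sum()

# Synthetic population: 100 neurons driven by 3 shared latent variables
# plus weak private noise, so the effective dimensionality is close to 3.
rng = np.random.default_rng(3)
n_t, n_neurons, n_latent = 2000, 100, 3
latents = rng.standard_normal((n_t, n_latent))
mixing = rng.standard_normal((n_latent, n_neurons))
activity = latents @ mixing + 0.05 * rng.standard_normal((n_t, n_neurons))
dim = participation_ratio(activity)
```

Here the linear mixing makes intrinsic and embedding dimensionality coincide; nonlinear embeddings of the same latents would be one way to probe the distinction the abstract draws between the two quantities.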