The emergence of syntax during childhood is a remarkable example of how complex correlations unfold in nonlinear ways through development. In particular, a rapid transition seems to occur as children reach the age of two, separating a two-word, tree-like network of syntactic relations among words from the scale-free graphs associated with the adult, complex grammar. Here we explore the evolution of syntax networks through language acquisition using the {\em chromatic number}, which captures the transition and provides a natural link to standard theories of syntactic structure. The data analysis is compared to a null model of network growth dynamics, which is shown to display nontrivial and sensible differences. At a more general level, we observe that the chromatic classes define independent regions of the graph and can thus be interpreted as the footprints of incompatibility relations, somewhat in opposition to modularity considerations.
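To make the graph-colouring notion concrete, here is a minimal Python sketch of the kind of computation involved, not the authors' pipeline: the chromatic number is NP-hard to compute exactly, so a greedy colouring is used to obtain an upper bound; the toy word network and the networkx calls are illustrative assumptions.

    import networkx as nx

    # Toy syntax network: nodes are words, edges are observed syntactic
    # links between them (illustrative data, not the acquisition corpora).
    G = nx.Graph()
    G.add_edges_from([("more", "juice"), ("more", "milk"),
                      ("mommy", "juice"), ("want", "juice")])

    # Exact chromatic number is NP-hard; greedy colouring gives an upper
    # bound. Each colour class is an independent set: a group of words
    # that are never syntactically linked, i.e. mutually incompatible.
    coloring = nx.coloring.greedy_color(G, strategy="largest_first")
    print("greedy upper bound on chromatic number:", max(coloring.values()) + 1)

The colour classes, being independent sets, partition the vocabulary into groups with no internal syntactic links, which is the incompatibility reading described above.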
The ability to store continuous variables in the state of a biological system (e.g. a neural network) is critical for many behaviours. Most models for implementing such a memory manifold require hand-crafted symmetries in the interactions or precise fine-tuning of parameters. We present a general principle that we refer to as {\it frozen stabilisation}, which allows a family of neural networks to self-organise to a critical state exhibiting memory manifolds without parameter fine-tuning or symmetries. These memory manifolds exhibit a true continuum of memory states and can be used as general-purpose integrators for inputs aligned with the manifold. Moreover, frozen stabilisation yields robust memory manifolds in small networks, which is relevant to debates on implementing continuous attractors with a small number of neurons in light of recent experimental discoveries.
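For contrast, a minimal hand-tuned line attractor (the classic construction the abstract argues against) can be sketched in a few lines; the rank-1 connectivity below is exactly the kind of engineered symmetry that frozen stabilisation is said to make unnecessary. All parameters are illustrative.

    import numpy as np

    N = 100
    u = np.ones(N) / np.sqrt(N)        # direction of the memory manifold
    W = np.outer(u, u)                 # rank-1 coupling: eigenvalue exactly 1 along u
    x = np.zeros(N)
    dt, tau = 0.01, 1.0
    for t in range(5000):
        inp = u * (0.5 if t < 1000 else 0.0)   # input aligned with the manifold
        x += dt / tau * (-x + W @ x + inp)     # leaky dynamics; marginal along u
    # After the input ends, the projection on u persists: the network has
    # integrated the input and stores it as a point on the continuum.
    print("stored value:", u @ x)

Perturbing the unit eigenvalue by even a fraction of a percent makes the stored value drift, which is the fine-tuning problem that a self-organised mechanism avoids.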
The collective dynamics of a network of excitable nodes changes dramatically when inhibitory nodes are introduced. We consider inhibitory nodes which may be activated just like excitatory nodes but which, upon activation, decrease the probability of activation of their network neighbors. We show that, although the direct effect of inhibitory nodes is to decrease activity, the collective dynamics becomes self-sustaining. We explain this counterintuitive result by defining and analyzing a branching function, which may be thought of as an activity-dependent branching ratio. The shape of the branching function implies that, for a range of global coupling parameters, the dynamics are self-sustaining. Within the self-sustaining region of parameter space lies a critical line along which the dynamics take the form of avalanches with universal scaling of size and duration, embedded in a ceaseless time series of activity. Our analyses, confirmed by numerical simulation, suggest that inhibition may play a counterintuitive role in excitable networks.
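A stochastic toy version of such a network can be simulated in a few lines to estimate the branching function empirically. The update rule, parameters, and random wiring below are our illustrative assumptions, not the exact model analyzed above.

    import numpy as np

    rng = np.random.default_rng(0)
    N, k, frac_inh, coupling, steps = 2000, 10, 0.2, 0.12, 200
    inhib = rng.random(N) < frac_inh          # which nodes are inhibitory
    active = rng.random(N) < 0.05             # seed activity

    history = []
    for _ in range(steps):
        drive = np.zeros(N)
        for i in np.flatnonzero(active):
            targets = rng.choice(N, size=k, replace=False)
            # excitatory nodes raise, inhibitory nodes lower, the
            # activation probability of their targets
            drive[targets] += -coupling if inhib[i] else coupling
        history.append(active.sum())
        active = rng.random(N) < np.clip(drive, 0.0, 1.0)

    # Empirical activity-dependent branching ratio: descendants per active
    # node; self-sustaining dynamics correspond to this function crossing 1.
    a = np.array(history, dtype=float)
    print("mean branching ratio:", np.mean(a[1:] / np.maximum(a[:-1], 1.0)))

Plotting the ratio a[t+1]/a[t] against a[t] approximates the branching function; a stable crossing of 1 at nonzero activity signals the self-sustaining regime.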
The structural human connectome (i.e. the network of fiber connections in the brain) can be analyzed at ever finer spatial resolution thanks to advances in neuroimaging. Here we analyze several large data sets for the human brain network made available by the Open Connectome Project. We apply statistical model selection to characterize the degree distributions of graphs containing up to $\simeq 10^6$ nodes and $\simeq 10^8$ edges. A three-parameter generalized Weibull (also known as a stretched exponential) distribution is a good fit to most of the observed degree distributions. For almost all networks, simple power laws cannot fit the data, but in some cases there is statistical support for power laws with an exponential cutoff. We also calculate the topological (graph) dimension $D$ and the small-world coefficient $\sigma$ of these networks. While $\sigma$ suggests a small-world topology, we find that $D < 4$, showing that long-distance connections provide only a small correction to the topology of the embedding three-dimensional space.
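This style of model selection on degree distributions can be sketched with the powerlaw package (Alstott et al.); the synthetic graph below is a stand-in for the connectome data, and the likelihood-ratio comparisons mirror the reported analysis only in form.

    import networkx as nx
    import powerlaw  # pip install powerlaw

    # Placeholder graph standing in for a connectome adjacency structure.
    G = nx.barabasi_albert_graph(100_000, 3)
    degrees = [d for _, d in G.degree()]

    fit = powerlaw.Fit(degrees, discrete=True)
    # Log-likelihood ratio R and p-value: R > 0 favours the first model.
    R, p = fit.distribution_compare('stretched_exponential', 'power_law')
    print(f"stretched exponential vs power law: R={R:.2f}, p={p:.3f}")
    R, p = fit.distribution_compare('truncated_power_law', 'power_law')
    print(f"power law with cutoff vs pure power law: R={R:.2f}, p={p:.3f}")

The topological dimension $D$ is obtained separately, e.g. by counting the number of nodes within graph distance $r$ of a source and fitting $N(r) \sim r^D$.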
We study the storage of multiple phase-coded patterns as stable dynamical attractors in recurrent neural networks with sparse connectivity. To determine the synaptic strength of existing connections and store the phase-coded patterns, we introduce a learning rule inspired by spike-timing-dependent plasticity (STDP). We find that, after learning, the spontaneous dynamics of the network replay one of the stored dynamical patterns, depending on the network initialization. We study the network capacity as a function of topology, and find that a small-world-like topology may be optimal, as a compromise between the high wiring cost of long-range connections and the increase in capacity they afford.
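A minimal sketch of building such couplings on a sparse small-world backbone is given below; the antisymmetric sine kernel and the normalisation are illustrative assumptions standing in for the STDP-inspired rule, and the dynamics that would replay the patterns are omitted.

    import numpy as np
    import networkx as nx

    N, P = 200, 3                                         # neurons, stored patterns
    G = nx.watts_strogatz_graph(N, k=10, p=0.1, seed=1)   # sparse small-world backbone
    phases = 2 * np.pi * np.random.default_rng(1).random((P, N))

    # STDP-like kernel: an odd function of the phase lag, so a neuron firing
    # just before its neighbour potentiates the forward synapse and
    # depresses the reverse one (illustrative kernel choice).
    J = np.zeros((N, N))
    for i, j in G.edges():                        # couplings only on existing edges
        for mu in range(P):
            dphi = phases[mu, i] - phases[mu, j]
            J[i, j] += np.sin(dphi) / P
            J[j, i] -= np.sin(dphi) / P

Capacity is then probed by initialising the network near one pattern's phases and checking whether the spontaneous dynamics replay it as more patterns are stored.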
In this paper, we clarify the mechanisms underlying a general phenomenon present in pulse-coupled heterogeneous inhibitory networks: inhibition can induce not only suppression of the neural activity, as expected, but can also promote neural reactivation. In particular, for globally coupled systems, the number of firing neurons decreases monotonically as the strength of inhibition increases (neurons' death). However, random pruning of the connections is able to reverse the action of inhibition, i.e. in a sparse network a sufficiently strong synaptic strength can surprisingly promote, rather than depress, the activity of the neurons (neurons' rebirth). Thus the number of firing neurons exhibits a minimum at some intermediate synaptic strength. We show that this minimum signals a transition from a regime dominated by the neurons with higher firing activity to a phase where all neurons are effectively sub-threshold and their irregular firing is driven by current fluctuations. We explain the origin of the transition by deriving an analytic mean-field formulation of the problem that provides the fraction of active neurons as well as the first two moments of their firing statistics. The introduction of a synaptic time scale does not modify the main aspects of the reported phenomenon. However, for sufficiently slow synapses the transition becomes dramatic: the system passes from a perfectly regular evolution to an irregular bursting dynamics. In this latter regime the model provides predictions consistent with experimental findings for a specific class of neurons, namely the medium spiny neurons in the striatum.
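The death-and-rebirth scenario can be probed with a crude leaky integrate-and-fire simulation; the model below (suprathreshold heterogeneous drive, instantaneous inhibitory pulses, fixed in-degree K) is a guessed minimal setting, not the paper's model, and the parameter values are arbitrary.

    import numpy as np

    def firing_fraction(g, K, N=400, T=200.0, dt=0.01, seed=0):
        """Fraction of neurons that fire at least once in a sparse,
        pulse-coupled, purely inhibitory LIF network (toy model)."""
        rng = np.random.default_rng(seed)
        I = rng.uniform(1.0, 1.5, N)                  # heterogeneous suprathreshold drive
        targets = [rng.choice(N, K, replace=False) for _ in range(N)]
        v = rng.random(N)
        fired_ever = np.zeros(N, dtype=bool)
        for _ in range(int(T / dt)):
            v += dt * (I - v)                         # leaky relaxation toward the drive
            spk = v >= 1.0
            v[spk] = 0.0
            fired_ever |= spk
            for i in np.flatnonzero(spk):             # instantaneous inhibitory pulses
                v[targets[i]] -= g / K
        return fired_ever.mean()

    # Sweeping the inhibition strength g probes the non-monotonic trend
    # described above; in a globally coupled network (K = N - 1) the
    # fraction should instead decrease monotonically.
    for g in (0.5, 2.0, 8.0):
        print(g, firing_fraction(g, K=20))

Whether the minimum and the fluctuation-driven rebirth appear in this toy version depends on the parameter ranges, so it should be read as a starting point for exploration rather than a reproduction of the result.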