
Clique of functional hubs orchestrates population bursts in developmentally regulated neural networks

Publication date: 2014
Fields: Biology, Physics
Language: English


It has recently been discovered that single neuron stimulation can impact network dynamics in immature and adult neuronal circuits. Here we report a novel mechanism which can explain, in neuronal circuits at an early stage of development, the peculiar role played by a few specific neurons in promoting/arresting the population activity. For this purpose, we consider a standard neuronal network model, with short-term synaptic plasticity, whose population activity is characterized by bursting behavior. The addition of developmentally inspired constraints and correlations in the distribution of the neuronal connectivities and excitabilities leads to the emergence of functional hub neurons, whose stimulation/deletion is critical for the network activity. Functional hubs form a clique, where a precise sequential activation of the neurons is essential to ignite collective events without any need for a specific topological architecture. Unsupervised time-lagged firings of supra-threshold cells, in connection with coordinated entrainments of near-threshold neurons, are the key ingredients to orchestrate these population bursts.
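The abstract describes the model only at the level of its ingredients: spiking units on a sparse random graph with short-term synaptic plasticity, whose collective activity is bursting. Purely as an illustration of those ingredients, the sketch below assumes leaky integrate-and-fire units (as in the companion study listed under related research) and Tsodyks-Markram-style synaptic depression; all parameter values and the connection probability are arbitrary choices, not taken from the paper.

```python
import numpy as np

# Leaky integrate-and-fire network with Tsodyks-Markram-style short-term
# synaptic depression; every parameter value here is an illustrative
# assumption, not one taken from the paper.
N, steps, dt = 100, 20000, 0.1         # neurons, time steps, step size (ms)
tau_m, v_th, v_reset = 20.0, 1.0, 0.0  # membrane time constant, threshold, reset
tau_rec = 800.0                        # recovery time of synaptic resources (ms)
u_frac = 0.5                           # fraction of resources used per spike
g_syn = 0.05                           # synaptic coupling strength

rng = np.random.default_rng(0)
C = (rng.random((N, N)) < 0.1).astype(float)   # sparse random connectivity
np.fill_diagonal(C, 0.0)
I_ext = rng.uniform(1.0, 1.1, N)       # slightly supra-threshold external drive

v = rng.uniform(0.0, 1.0, N)           # membrane potentials
x = np.ones(N)                         # available presynaptic resources

activity = []
for _ in range(steps):
    spikes = (v >= v_th).astype(float)
    v[spikes > 0] = v_reset
    syn = g_syn * C @ (u_frac * x * spikes)        # depressed synaptic pulses
    x += dt * (1.0 - x) / tau_rec - u_frac * x * spikes
    v += dt * (-v + I_ext) / tau_m + syn
    activity.append(spikes.sum())

print("mean fraction of neurons firing per step:", np.mean(activity) / N)
```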



Related research

We consider a sparse random network of excitatory leaky integrate-and-fire neurons with short-term synaptic depression. Furthermore, to mimic the dynamics of a brain circuit in its first stages of development, we introduce for each neuron correlations between in-degree and out-degree, as well as between excitability and the corresponding total degree. We analyze the influence of single neuron stimulation and deletion on the collective dynamics of the network. We show the existence of a small group of neurons capable of controlling and even silencing the bursting activity of the network. These neurons form a functional clique, since only their activation in a precise order and within specific time windows is capable of igniting population bursts.
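The distinguishing ingredient of this model is the developmentally inspired correlation structure. One possible way to impose it, offered here only as a rough sketch (the paper's exact construction is not reproduced), is to let a single latent "maturity" variable per neuron drive in-degree, out-degree, and excitability together:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200

# A single latent "maturity" variable per neuron drives all correlations
# (an assumption made for illustration; the paper's construction may differ).
maturity = rng.random(N)
k_in = np.maximum(1, np.round(5 + 45 * maturity)).astype(int)
k_out = np.maximum(1, np.round(5 + 45 * maturity
                               + rng.normal(0.0, 3.0, N))).astype(int)

# Directed graph honouring the prescribed degrees (configuration-model style:
# each neuron's outgoing links pick targets proportionally to their in-degree).
p_target = k_in / k_in.sum()
A = np.zeros((N, N), dtype=int)
for i in range(N):
    targets = rng.choice(N, size=k_out[i], replace=False, p=p_target)
    A[i, targets] = 1
np.fill_diagonal(A, 0)

# Excitability grows with the realised total degree, so the most connected
# neurons are also the closest to (or above) the firing threshold.
k_tot = A.sum(axis=0) + A.sum(axis=1)
excitability = 0.9 + 0.2 * (k_tot - k_tot.min()) / (k_tot.max() - k_tot.min())
```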
The structural human connectome (i.e. the network of fiber connections in the brain) can be analyzed at ever finer spatial resolution thanks to advances in neuroimaging. Here we analyze several large data sets for the human brain network made available by the Open Connectome Project. We apply statistical model selection to characterize the degree distributions of graphs containing up to $\simeq 10^6$ nodes and $\simeq 10^8$ edges. A three-parameter generalized Weibull (also known as a stretched exponential) distribution is a good fit to most of the observed degree distributions. For almost all networks, simple power laws cannot fit the data, but in some cases there is statistical support for power laws with an exponential cutoff. We also calculate the topological (graph) dimension $D$ and the small-world coefficient $\sigma$ of these networks. While $\sigma$ suggests a small-world topology, we found that $D < 4$, showing that long-distance connections provide only a small correction to the topology of the embedding three-dimensional space.
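As an illustration of this kind of model selection (not the paper's actual pipeline, and applied to a synthetic degree sequence rather than the Open Connectome data), one can fit a three-parameter Weibull and a pure power law by maximum likelihood and compare them with the Akaike information criterion:

```python
import numpy as np
from scipy import stats

# Synthetic continuous stand-in for a degree sequence; the paper uses the
# degrees of the Open Connectome Project graphs instead.
rng = np.random.default_rng(2)
degrees = stats.weibull_min.rvs(1.5, loc=1.0, scale=30.0, size=5000,
                                random_state=rng)

# Three-parameter Weibull fit (shape, location, scale) by maximum likelihood.
c, loc, scale = stats.weibull_min.fit(degrees)
ll_weibull = stats.weibull_min.logpdf(degrees, c, loc, scale).sum()

# Pure power law p(k) ~ k^{-alpha} for k >= kmin (continuous MLE).
kmin = degrees.min()
alpha = 1.0 + len(degrees) / np.log(degrees / kmin).sum()
ll_power = (np.log((alpha - 1.0) / kmin)
            - alpha * np.log(degrees / kmin)).sum()

# Akaike information criterion: 2 * (number of parameters) - 2 * log-likelihood;
# lower is better.
aic_weibull = 2 * 3 - 2 * ll_weibull
aic_power = 2 * 1 - 2 * ll_power
print(f"AIC Weibull {aic_weibull:.1f}  vs  power law {aic_power:.1f}")
```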
We study the storage of multiple phase-coded patterns as stable dynamical attractors in recurrent neural networks with sparse connectivity. To determine the synaptic strength of existing connections and store the phase-coded patterns, we introduce a learning rule inspired by spike-timing-dependent plasticity (STDP). We find that, after learning, the spontaneous dynamics of the network replay one of the stored dynamical patterns, depending on the network initialization. We study the network capacity as a function of topology, and find that a small-world-like topology may be optimal, as a compromise between the high wiring cost of long-range connections and the capacity increase.
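A schematic version of such a rule, with an arbitrarily chosen asymmetric kernel standing in for the paper's STDP-inspired one, assigns to every existing connection a weight summed over the phase lags of the stored patterns:

```python
import numpy as np

rng = np.random.default_rng(3)
N, P, c = 500, 3, 0.05              # neurons, phase-coded patterns, connectivity

# Random phases in [0, 2*pi) for each pattern and neuron.
phi = rng.uniform(0.0, 2.0 * np.pi, size=(P, N))

# Sparse random connectivity mask: only existing connections get a weight.
mask = (rng.random((N, N)) < c) & ~np.eye(N, dtype=bool)

def stdp_kernel(dphi, tau=0.5):
    """Asymmetric STDP-like kernel of the phase lag (illustrative choice):
    potentiation when the presynaptic phase leads, depression otherwise."""
    lag = np.angle(np.exp(1j * dphi))            # wrap to (-pi, pi]
    return np.sign(lag) * np.exp(-np.abs(lag) / tau)

# Learned weights: sum of the kernel over all stored patterns, evaluated on
# the phase difference (post minus pre) of each connection.
J = np.zeros((N, N))
for mu in range(P):
    dphi = phi[mu][:, None] - phi[mu][None, :]   # dphi[i, j] = phi_i - phi_j
    J += stdp_kernel(dphi)
J *= mask / P
```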
In this paper, we clarify the mechanisms underlying a general phenomenon present in pulse-coupled heterogeneous inhibitory networks: inhibition can induce not only suppression of the neural activity, as expected, but it can also promote neural reactivation. In particular, for globally coupled systems, the number of firing neurons monotonically decreases upon increasing the strength of inhibition (neurons' death). However, the random pruning of the connections is able to reverse the action of inhibition, i.e. in a sparse network a sufficiently strong synaptic strength can surprisingly promote, rather than depress, the activity of the neurons (neurons' rebirth). Thus the number of firing neurons exhibits a minimum at some intermediate synaptic strength. We show that this minimum signals a transition from a regime dominated by the neurons with higher firing activity to a phase where all neurons are effectively sub-threshold and their irregular firing is driven by current fluctuations. We explain the origin of the transition by deriving an analytic mean-field formulation of the problem that provides the fraction of active neurons as well as the first two moments of their firing statistics. The introduction of a synaptic time scale does not modify the main aspects of the reported phenomenon. However, for sufficiently slow synapses the transition becomes dramatic: the system passes from a perfectly regular evolution to an irregular bursting dynamics. In this latter regime the model provides predictions consistent with experimental findings for a specific class of neurons, namely the medium spiny neurons in the striatum.
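The quantity at the heart of the "death/rebirth" scenario is the fraction of neurons that keep firing as the inhibitory coupling grows. A toy estimate of that curve could look like the sketch below (delta-pulse coupling, illustrative parameters; whether the non-monotonicity actually appears depends on the parameter choices, and the paper's mean-field analysis is not reproduced here):

```python
import numpy as np

def active_fraction(g, N=400, k=20, steps=20000, dt=0.05, seed=0):
    """Fraction of neurons that fire at least once in a sparse network of
    pulse-coupled inhibitory LIF neurons with synaptic strength g (sketch)."""
    rng = np.random.default_rng(seed)
    A = np.zeros((N, N))
    for j in range(N):                       # each neuron sends k inhibitory
        A[rng.choice(N, k, replace=False), j] = 1.0   # connections (sparse graph)
    I = rng.uniform(1.0, 1.5, N)             # heterogeneous supra-threshold drive
    v = rng.uniform(0.0, 1.0, N)
    fired = np.zeros(N, dtype=bool)
    for _ in range(steps):
        v += dt * (I - v)                    # leaky integration (tau = 1)
        spk = v >= 1.0
        if spk.any():
            fired |= spk
            v[spk] = 0.0
            v -= g / k * (A @ spk)           # instantaneous inhibitory pulses
    return fired.mean()

for g in (0.5, 2.0, 8.0, 20.0):
    print(g, active_fraction(g))
```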
An essential requirement for the representation of functional patterns in complex neural networks, such as the mammalian cerebral cortex, is the existence of stable regimes of network activation, typically arising from a limited parameter range. In this range of limited sustained activity (LSA), the activity of neural populations in the network persists between the extremes of either quickly dying out or activating the whole network. Hierarchical modular networks were previously found to show a wider parameter range for LSA than random or small-world networks not possessing hierarchical organization or multiple modules. Here we explored how variation in the number of hierarchical levels and modules per level influenced network dynamics and occurrence of LSA. We tested hierarchical configurations of different network sizes, approximating the large-scale networks linking cortical columns in one hemisphere of the rat, cat, or macaque monkey brain. Scaling of the network size affected the number of hierarchical levels and modules in the optimal networks, also depending on whether global edge density or the numbers of connections per node were kept constant. For constant edge density, only few network configurations, possessing an intermediate number of levels and a large number of modules, led to a large range of LSA independent of brain size. For a constant number of node connections, there was a trend for optimal configurations in larger-size networks to possess a larger number of hierarchical levels or more modules. These results may help to explain the trend to greater network complexity apparent in larger brains and may indicate that this complexity is required for maintaining stable levels of neural activation.
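A hierarchical modular graph of the kind studied here can be generated, for instance, by splitting the nodes recursively into modules and letting the connection probability decay with the hierarchical distance between two nodes; the generator below is a generic sketch with arbitrary parameters, not the construction used in the paper:

```python
import numpy as np

def hierarchical_modular(n_nodes, levels, modules_per_level, p_deep=0.3,
                         ratio=0.25, seed=0):
    """Random hierarchical modular graph (sketch): nodes are split recursively
    into `modules_per_level` modules over `levels` levels; the connection
    probability between two nodes depends on the deepest level at which they
    still share a module, dropping by `ratio` for each level they do not."""
    rng = np.random.default_rng(seed)
    # Module label of each node at each hierarchical level.
    labels = np.zeros((levels, n_nodes), dtype=int)
    for l in range(levels):
        labels[l] = (np.arange(n_nodes) * modules_per_level ** (l + 1)) // n_nodes
    # Number of levels at which every node pair shares a module.
    shared = np.zeros((n_nodes, n_nodes), dtype=int)
    for l in range(levels):
        shared += (labels[l][:, None] == labels[l][None, :]).astype(int)
    p = p_deep * ratio ** (levels - shared)   # denser inside deeper modules
    upper = np.triu(rng.random((n_nodes, n_nodes)) < p, k=1)
    A = (upper | upper.T).astype(int)         # symmetric, no self-loops
    return A

A = hierarchical_modular(n_nodes=512, levels=3, modules_per_level=2)
print("edge density:", A.sum() / (512 * 511))
```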