Functional networks provide a topological description of activity patterns in the brain, as they stem from the propagation of neural activity on the underlying anatomical or structural network of synaptic connections. The latter is well known to be organized in a hierarchical and modular way. While it is assumed that structural networks shape their functional counterparts, it is also hypothesized that alterations of brain dynamics come with transformations of functional connectivity. In this computational study, we introduce a novel methodology to monitor the persistence and breakdown of hierarchical order in functional networks generated from computational models of activity spreading on both synthetic and real structural connectomes. We show that hierarchical connectivity appears in functional networks in a persistent way if the dynamics is set to the quasi-critical regime associated with optimal processing capabilities and normal brain function, while it breaks down in other (supercritical) dynamical regimes, which are often associated with pathological conditions. Our results offer important clues for the study of optimal neurocomputing architectures and processes capable of controlling patterns of activity and information flow. We conclude that functional connectivity patterns achieve an optimal balance between local specialized processing (i.e. segregation) and global integration by inheriting the hierarchical organization of the underlying structural architecture.
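To make the general pipeline behind this kind of study concrete, here is a minimal sketch, assuming a simple stochastic spreading rule, a caveman-style modular graph as a stand-in for the structural connectome, and an arbitrary correlation threshold; all of these are illustrative choices, not the authors' model.

```python
# Sketch: simulate activity spreading on a structural network, then build a
# functional network from pairwise activity correlations. Parameters are
# illustrative assumptions, not the paper's.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# Hypothetical modular structural connectome: 8 modules of 16 nodes each
structural = nx.connected_caveman_graph(8, 16)
A = nx.to_numpy_array(structural)
n = A.shape[0]

def simulate(lam=0.12, steps=2000, p_spont=0.01):
    """Simple probabilistic activation dynamics tuned by the coupling lam."""
    x = (rng.random(n) < 0.05).astype(float)            # initial active nodes
    activity = np.zeros((steps, n))
    for t in range(steps):
        drive = A @ x                                    # input from active neighbours
        p_on = 1.0 - np.exp(-lam * drive) + p_spont      # activation probability
        x = (rng.random(n) < p_on).astype(float)
        activity[t] = x
    return activity

# Functional network: threshold the correlation matrix of node activities
activity = simulate()
corr = np.corrcoef(activity.T)
functional = corr > 0.3                                  # illustrative threshold
np.fill_diagonal(functional, False)
print("functional links:", functional.sum() // 2)
```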
Although most networks in nature exhibit complex topology, the origins of such complexity remain unclear. We introduce a model of a growing network of interacting agents in which each new agent's membership in the network is determined by the agent's effect on the network's global stability. It is shown that, out of this stability constraint, scale-free networks emerge in a self-organized manner, offering an explanation for the ubiquity of complex topological properties observed in biological networks.
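A hedged sketch of how such a stability-constrained growth rule might be implemented is given below. The acceptance criterion (all eigenvalues of the interaction matrix keep negative real parts), the random interaction statistics, and the self-regulation strength are my own illustrative assumptions; the paper's exact model may differ.

```python
# Sketch: a candidate agent joins the network only if the community interaction
# matrix remains linearly stable after its addition.
import numpy as np

rng = np.random.default_rng(1)

def grow_network(target_size=100, d=1.0, sigma=0.5, k=3, max_trials=20000):
    J = -d * np.eye(2)                        # two self-regulating founder agents
    J[0, 1] = J[1, 0] = sigma * rng.standard_normal()
    trials = 0
    while J.shape[0] < target_size and trials < max_trials:
        trials += 1
        n = J.shape[0]
        # candidate interacts with k randomly chosen existing agents
        partners = rng.choice(n, size=min(k, n), replace=False)
        row = np.zeros(n); col = np.zeros(n)
        row[partners] = sigma * rng.standard_normal(len(partners))
        col[partners] = sigma * rng.standard_normal(len(partners))
        J_new = np.block([[J, col[:, None]],
                          [row[None, :], np.array([[-d]])]])
        # accept only if every eigenvalue keeps a negative real part (stability)
        if np.max(np.linalg.eigvals(J_new).real) < 0:
            J = J_new
    return J

J = grow_network()
degrees = ((np.abs(J) + np.abs(J.T)) > 0).sum(axis=1) - 1   # ignore self-terms
print("grew", J.shape[0], "agents; max degree", degrees.max())
```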
We investigate site percolation in a hierarchical scale-free network known as the Dorogovtsev-Goltsev-Mendes network. We use the generating function method to show that the percolation threshold is 1, i.e., the system is not in the percolating phase when the occupation probability is less than 1. This result is contrasted with bond percolation in the same network, for which the percolation threshold is zero. We also show that the percolation threshold under intentional attacks is 1. Our results suggest that this hierarchical scale-free network is very fragile against both random failure and intentional attack. Such a structural defect is common in many hierarchical network models.
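As a numerical companion to the analytical generating-function result, one can build the deterministic Dorogovtsev-Goltsev-Mendes network and estimate the largest-cluster size under random site removal; the generation count, occupation probabilities, and sample sizes below are illustrative choices, not part of the paper.

```python
# Sketch: grow the DGM network by attaching a new node to every existing edge at
# each generation, then measure the relative giant component under site dilution.
import random
import networkx as nx

def dgm_network(generations=8):
    G = nx.Graph([(0, 1)])                    # generation 0: a single edge
    next_id = 2
    for _ in range(generations):
        for u, v in list(G.edges()):
            G.add_edges_from([(u, next_id), (v, next_id)])
            next_id += 1
    return G

def site_percolation_gcc(G, p, samples=20):
    sizes = []
    for _ in range(samples):
        kept = [v for v in G if random.random() < p]   # occupy each site with prob p
        H = G.subgraph(kept)
        gcc = max((len(c) for c in nx.connected_components(H)), default=0)
        sizes.append(gcc / G.number_of_nodes())
    return sum(sizes) / samples

G = dgm_network(8)
for p in (0.5, 0.8, 0.95, 0.99):
    print(f"p = {p:.2f}: relative giant component ~ {site_percolation_gcc(G, p):.3f}")
```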
Groups of firms often achieve a competitive advantage through the formation of geo-industrial clusters. Although many exemplary clusters, such as Hollywood or Silicon Valley, have been frequently studied, systematic approaches to identify and analyze the hierarchical structure of geo-industrial clusters at the global scale are rare. In this work, we use LinkedIn's employment histories of more than 500 million users over 25 years to construct a labor flow network of over 4 million firms across the world and apply a recursive network community detection algorithm to reveal the hierarchical structure of geo-industrial clusters. We show that the resulting geo-industrial clusters exhibit a stronger association between the influx of educated workers and financial performance, compared to existing aggregation units. Furthermore, our additional analysis of the skill sets of educated workers supplements the relationship between the labor flow of educated workers and productivity growth. We argue that geo-industrial clusters defined by labor flow provide better insights into the growth and decline of the economy than other common economic units.
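The recursive detection step can be illustrated with a small sketch on a toy labor-flow network; the firm names, transition counts, and the greedy modularity routine used here are stand-ins, not the study's proprietary data or specific algorithm.

```python
# Sketch: firms are nodes, edge weights count worker transitions, and each
# detected community is split again until it cannot be subdivided meaningfully.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def recursive_communities(G, min_size=10, depth=0, max_depth=4):
    """Return a nested list describing the hierarchy of communities."""
    if depth >= max_depth or G.number_of_nodes() < min_size:
        return sorted(G.nodes())
    parts = greedy_modularity_communities(G, weight="weight")
    if len(parts) <= 1:                       # no further meaningful split
        return sorted(G.nodes())
    return [recursive_communities(G.subgraph(p).copy(), min_size, depth + 1, max_depth)
            for p in parts]

# Toy labor-flow network with hypothetical firm names and transition counts
flows = [("FirmA", "FirmB", 40), ("FirmB", "FirmC", 35), ("FirmA", "FirmC", 30),
         ("FirmD", "FirmE", 50), ("FirmE", "FirmF", 45), ("FirmC", "FirmD", 2)]
G = nx.Graph()
G.add_weighted_edges_from(flows)
print(recursive_communities(G, min_size=2))
```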
A brief review is given of the study of the thermodynamic properties of spin models defined on different topologies such as small-world and scale-free networks, random graphs, and regular and random lattices. Ising, Potts, and Blume-Capel models are considered. They are defined on complex lattices comprising Apollonian, Barabási-Albert, Voronoi-Delaunay, and small-world networks. The main emphasis is placed on the corresponding phase transitions, transition temperatures, critical exponents, and universality, compared to those obtained with the same models on regular Bravais lattices.
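As an illustration of the simulations underlying such studies, a short Metropolis Monte Carlo sketch for the Ising model on a Barabási-Albert network is given below; the network size, sweep counts, and temperatures are arbitrary choices made for this example.

```python
# Sketch: single-spin-flip Metropolis dynamics for the Ising model on a
# Barabasi-Albert network, measuring the mean absolute magnetization.
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
G = nx.barabasi_albert_graph(500, 3, seed=2)
neighbors = [list(G.neighbors(i)) for i in G.nodes()]
n = G.number_of_nodes()

def metropolis_magnetization(T, sweeps=400, equil=200, J=1.0):
    spins = rng.choice([-1, 1], size=n)
    mags = []
    for sweep in range(sweeps):
        for _ in range(n):
            i = rng.integers(n)
            # energy change for flipping spin i, E = -J * sum over edges
            dE = 2.0 * J * spins[i] * sum(spins[j] for j in neighbors[i])
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i] = -spins[i]          # standard Metropolis acceptance
        if sweep >= equil:
            mags.append(abs(spins.mean()))
    return np.mean(mags)

for T in (1.0, 3.0, 6.0, 10.0):
    print(f"T = {T:4.1f}: <|m|> ~ {metropolis_magnetization(T):.3f}")
```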
Generalized linear models are one of the most efficient paradigms for predicting the correlated stochastic activity of neuronal networks in response to external stimuli, with applications in many brain areas. However, when dealing with complex stimuli, the inferred coupling parameters often do not generalize across different stimulus statistics, leading to degraded performance and blowup instabilities. Here, we develop a two-step inference strategy that allows us to train robust generalized linear models of interacting neurons, by explicitly separating the effects of correlations in the stimulus from network interactions in each training step. Applying this approach to the responses of retinal ganglion cells to complex visual stimuli, we show that, compared to classical methods, the models trained in this way exhibit improved performance, are more stable, yield robust interaction networks, and generalize well across complex visual statistics. The method can be extended to deep convolutional neural networks, leading to models with high predictive accuracy for both the neuron firing rates and their correlations.
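A minimal sketch of the two-step idea follows, using synthetic data and a plain gradient-ascent Poisson GLM; the learning rate, network size, and one-step-lagged coupling design are illustrative assumptions, not the authors' implementation. Step 1 fits each neuron's stimulus filter with couplings held at zero; step 2 freezes that stimulus drive as an offset and fits only the neuron-to-neuron couplings.

```python
# Sketch: two-step inference for a Poisson GLM of interacting neurons on
# synthetic data (all parameters and the data-generating model are assumptions).
import numpy as np

rng = np.random.default_rng(3)
T, n_neurons, n_stim = 5000, 5, 10
stim = rng.standard_normal((T, n_stim))

# Hypothetical ground truth used only to generate synthetic spike trains
true_filters = rng.standard_normal((n_neurons, n_stim)) * 0.3
true_coupling = rng.standard_normal((n_neurons, n_neurons)) * 0.2
np.fill_diagonal(true_coupling, 0.0)

# Generate spikes sequentially: log-rate = stimulus drive + lagged coupling drive
spikes = np.zeros((T, n_neurons), dtype=int)
log_stim_drive = stim @ true_filters.T - 1.0
for t in range(1, T):
    log_rate = log_stim_drive[t] + true_coupling @ spikes[t - 1]
    spikes[t] = rng.poisson(np.exp(np.clip(log_rate, -10.0, 3.0)))

def fit_poisson_glm(X, y, offset=0.0, steps=500, lr=0.1):
    """Gradient ascent on the Poisson log-likelihood, log-rate = offset + X @ w."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        rate = np.exp(np.clip(offset + X @ w, -10.0, 5.0))
        w += lr * X.T @ (y - rate) / len(y)
    return w

# Step 1: fit each neuron's stimulus filter (plus bias) with couplings at zero
X_stim = np.hstack([stim, np.ones((T, 1))])
filters = np.array([fit_poisson_glm(X_stim, spikes[:, i]) for i in range(n_neurons)])

# Step 2: freeze the fitted stimulus drive as an offset and fit only the
# couplings from each neuron's one-step-lagged spike history
stim_drive = X_stim @ filters.T
couplings = np.zeros((n_neurons, n_neurons))
for i in range(n_neurons):
    others = [j for j in range(n_neurons) if j != i]
    X_cpl = spikes[:-1, :][:, others].astype(float)
    couplings[i, others] = fit_poisson_glm(X_cpl, spikes[1:, i],
                                           offset=stim_drive[1:, i])
print("fitted coupling matrix:\n", np.round(couplings, 2))
```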