Neural computation is associated with the emergence, reconfiguration and dissolution of cell assemblies in the context of varying oscillatory states. Here, we describe the complex spatiotemporal dynamics of cell assemblies through a temporal network formalism. We use a sliding window approach to extract sequences of networks of information sharing among single units in the hippocampus and entorhinal cortex during anesthesia and study how global and node-wise functional connectivity properties evolve over time. First, we find that information sharing networks display, at any time, a core-periphery structure in which an integrated core of more tightly functionally interconnected units links to more loosely connected network leaves. However, the units participating in the core or the periphery change substantially across time windows. Second, we find that discrete network states can be defined on top of this continuously ongoing liquid core-periphery reorganization. Switching between network states results in a more abrupt modification of the units belonging to the core and is only loosely linked to transitions between global oscillatory states. Third, we characterize different styles of temporal connectivity that cells can exhibit within each state of the sharing network. While inhibitory cells tend to be central, we show that, otherwise, anatomical localization only weakly influences the patterns of temporal connectivity of the different cells. Cells can also change temporal connectivity style when the network changes state. Altogether, these findings reveal that the sharing of information mediated by the intrinsic dynamics of hippocampal and entorhinal cortex cell assemblies has a rich spatiotemporal structure, which could not have been identified by more conventional time- or state-averaged analyses of functional connectivity.
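A minimal sketch of the sliding-window construction described above, assuming the spike trains have already been discretized into counts and using mutual information between binned counts as a stand-in for the information-sharing measure (function and variable names are illustrative, not the authors' pipeline):

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def sharing_networks(counts, win, step):
    """Sliding-window functional networks from binned spike counts.

    counts : (n_units, n_bins) integer array of per-bin spike counts
    win    : window length, in bins
    step   : stride between consecutive windows, in bins
    Returns a list of (n_units, n_units) mutual-information matrices,
    one per window, whose sequence forms the temporal network.
    """
    n_units, n_bins = counts.shape
    nets = []
    for start in range(0, n_bins - win + 1, step):
        chunk = counts[:, start:start + win]
        mi = np.zeros((n_units, n_units))
        for i in range(n_units):
            for j in range(i + 1, n_units):
                # treat the two count sequences as discrete symbols
                mi[i, j] = mi[j, i] = mutual_info_score(chunk[i], chunk[j])
        nets.append(mi)
    return nets
```

Each matrix in the returned sequence can then be thresholded into a graph, and core-periphery statistics tracked window by window.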
A growing number of systems are represented as networks whose architecture conveys significant information and determines many of their properties. Examples of network architecture include modular, bipartite, and core-periphery structures. However, inferring the network structure is a nontrivial task and can sometimes depend on the chosen null model. Here we propose a method for classifying a network's structure and ranking its nodes in a statistically well-grounded fashion. The method is based on the use of Belief Propagation for learning through Entropy Maximization on both the Stochastic Block Model (SBM) and the degree-corrected Stochastic Block Model (dcSBM). As a specific application, we show how the combined use of the two ensembles (SBM and dcSBM) makes it possible to disentangle the bipartite and core-periphery structures in the case of the e-MID interbank network. Specifically, we find that, once node degrees are taken into account, this interbank network is better described by a bipartite structure, whereas under the SBM a core-periphery structure emerges only when data are aggregated over more than a week.
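The belief-propagation and entropy-maximization machinery used here does not fit in a short snippet, but the core contrast between the two ensembles can be illustrated by scoring a fixed partition under each. Below is a minimal sketch based on the standard Karrer-Newman profile log-likelihoods for the Poisson SBM and dcSBM (a textbook surrogate, not the paper's estimator):

```python
import numpy as np

def profile_loglik(A, groups, deg_corr=False):
    """Karrer-Newman profile log-likelihood of a partition.

    A        : (n, n) adjacency matrix of an undirected graph
    groups   : length-n array of block labels
    deg_corr : score under the dcSBM instead of the plain SBM
    """
    labels = np.unique(groups)
    # m[r, s]: total adjacency weight between blocks r and s
    m = np.array([[A[np.ix_(groups == r, groups == s)].sum()
                   for s in labels] for r in labels], dtype=float)
    if deg_corr:
        kappa = m.sum(axis=1)                  # total degree per block
        denom = np.outer(kappa, kappa)
    else:
        n = np.array([(groups == r).sum() for r in labels], dtype=float)
        denom = np.outer(n, n)                 # block sizes
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(m > 0, m * np.log(m / denom), 0.0)
    return terms.sum()
```

Comparing the two scores for candidate bipartite and core-periphery partitions is the model-selection step where the degree correction matters: once degrees are absorbed by the dcSBM, an apparent core-periphery split can dissolve into a bipartite one, as reported above.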
Intermediate-scale (or 'meso-scale') structures in networks have received considerable attention, as the algorithmic detection of such structures makes it possible to discover network features that are not apparent either at the local scale of nodes and edges or at the global scale of summary statistics. Numerous types of meso-scale structures can occur in networks, but investigations of such features have focused predominantly on the identification and study of community structure. In this paper, we develop a new method to investigate the meso-scale feature known as core-periphery structure, which entails identifying densely connected core nodes and sparsely connected periphery nodes. In contrast to communities, the nodes in a core are also reasonably well connected to those in the periphery. Our new method of computing core-periphery structure can identify multiple cores in a network and takes different possible cores into account. We illustrate the differences between our method and several existing methods for identifying which nodes belong to a core, and we use our technique to examine core-periphery structure in examples of friendship, collaboration, transportation, and voting networks.
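For contrast with the multi-core method introduced here, the classic single-core baseline of Borgatti and Everett can be written in a few lines: correlate the adjacency matrix with an ideal hub-and-spoke pattern and greedily search over core assignments. A minimal sketch of that baseline (illustrative names, and deliberately not the paper's method):

```python
import numpy as np

def be_correlation(A, core):
    """Pearson correlation between the adjacency matrix and the ideal
    pattern in which every edge touches at least one core node."""
    iu = np.triu_indices(A.shape[0], k=1)
    ideal = (core[:, None] | core[None, :])[iu].astype(float)
    return np.corrcoef(A[iu].astype(float), ideal)[0, 1]

def greedy_core(A, n_sweeps=10):
    """Greedily toggle core membership of single nodes while the
    correlation with the ideal core-periphery pattern improves."""
    degrees = A.sum(axis=1)
    core = degrees > np.median(degrees)        # degree-based seed
    best = be_correlation(A, core)
    for _ in range(n_sweeps):
        improved = False
        for v in range(A.shape[0]):
            core[v] = ~core[v]                 # tentative flip
            score = be_correlation(A, core)
            if score > best:
                best, improved = score, True
            else:
                core[v] = ~core[v]             # revert
        if not improved:
            break
    return core, best
```

The limitation this baseline makes visible is exactly the one addressed above: it returns a single binary core, whereas real networks may contain several distinct cores.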
Thalamic relay cells fire action potentials that transmit information from retina to cortex. The amount of information that spike trains encode is usually estimated from the precision of spike timing with respect to the stimulus. Sensory input, however, is only one factor that influences neural activity. For example, intrinsic dynamics, such as oscillations of networks of neurons, also modulate firing pattern. Here, we asked if retinal oscillations might help to convey information to neurons downstream. Specifically, we made whole-cell recordings from relay cells to reveal retinal inputs (EPSPs) and thalamic outputs (spikes) and analyzed these events with information theory. Our results show that thalamic spike trains operate as two multiplexed channels. One channel, which occupies a low frequency band (<30 Hz), is encoded by average firing rate with respect to the stimulus and carries information about local changes in the image over time. The other operates in the gamma frequency band (40-80 Hz) and is encoded by spike time relative to the retinal oscillations. Because these oscillations involve extensive areas of the retina, it is likely that the second channel transmits information about global features of the visual scene. At times, the second channel conveyed even more information than the first.
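The two-channel claim is ultimately a statement about mutual information computed against two different readouts of the same spike train: spike counts for the low-frequency channel and spike phase relative to the retinal oscillation for the gamma channel. A minimal plug-in estimator (histogram-based; analyses of limited data additionally need bias correction, which is omitted here) might look like:

```python
import numpy as np

def plugin_mi(x, y):
    """Plug-in estimate of the mutual information I(X; Y), in bits,
    between two equal-length sequences of discrete symbols."""
    _, xi = np.unique(x, return_inverse=True)
    _, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((xi.max() + 1, yi.max() + 1))
    np.add.at(joint, (xi, yi), 1)              # joint histogram
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

# Hypothetical usage, with stimulus and responses already discretized:
#   rate_mi  = plugin_mi(stim_bins, spike_counts)       # <30 Hz channel
#   phase_mi = plugin_mi(stim_bins, gamma_phase_bins)   # 40-80 Hz channel
```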
As a person learns a new skill, distinct synapses, brain regions, and circuits are engaged and change over time. In this paper, we develop methods to examine patterns of correlated activity across a large set of brain regions. Our goal is to identify properties that enable robust learning of a motor skill. We measure brain activity during motor sequencing and characterize network properties based on coherent activity between brain regions. Using recently developed algorithms to detect time-evolving communities, we find that the complex reconfiguration patterns of the brain's putative functional modules that control learning can be described parsimoniously by the combined presence of a relatively stiff temporal core, composed primarily of sensorimotor and visual regions whose connectivity changes little in time, and a flexible temporal periphery, composed primarily of multimodal association regions whose connectivity changes frequently. The separation between temporal core and periphery changes over the course of training and, importantly, is a good predictor of individual differences in learning success. The core of dynamically stiff regions exhibits dense connectivity, which is consistent with notions of core-periphery organization established previously in social networks. Our results demonstrate that core-periphery organization provides an insightful way to understand how putative functional modules are linked. This, in turn, enables the prediction of fundamental human capacities, including the production of complex goal-directed behavior.
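The stiff-core versus flexible-periphery distinction rests on a simple statistic over the time-evolving community labels: how often each region switches module between consecutive time windows. A minimal sketch, assuming the labels come from some multilayer community-detection run (the median split below is illustrative; the paper derives the core-periphery separation from the data):

```python
import numpy as np

def flexibility(labels):
    """Per-node flexibility: the fraction of consecutive time layers in
    which a node changes community assignment.

    labels : (n_layers, n_nodes) integer array of community labels
    """
    changes = labels[1:] != labels[:-1]
    return changes.mean(axis=0)

def stiff_core(labels):
    """Boolean mask of the temporal core: nodes whose flexibility is at
    or below the median (an illustrative threshold)."""
    flex = flexibility(labels)
    return flex <= np.median(flex)
```

Tracking how this split, and the flexibility values themselves, evolve over training sessions is what links the network statistic to individual learning success.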
Core-periphery structure, the arrangement of a network into a dense core and sparse periphery, is a versatile descriptor of various social, biological, and technological networks. In practice, different core-periphery algorithms are often applied interchangeably, despite the fact that they can yield inconsistent descriptions of core-periphery structure. For example, two of the most widely used algorithms, the k-cores decomposition and the classic two-block model of Borgatti and Everett, extract fundamentally different structures: the latter partitions a network into a binary hub-and-spoke layout, while the former divides it into a layered hierarchy. We introduce a core-periphery typology to clarify these differences, along with Bayesian stochastic block modeling techniques to classify networks in accordance with this typology. Empirically, we find a rich diversity of core-periphery structure among networks. Through a detailed case study, we demonstrate the importance of acknowledging this diversity and situating networks within the core-periphery typology when conducting domain-specific analyses.
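The inconsistency described here is easy to see in practice: the k-cores decomposition returns a layered hierarchy of shell indices, while a two-block model returns a single binary split. A small illustration with networkx (the top-shell cut below only exposes the difference in output types; it is not a Borgatti-Everett fit, nor the Bayesian classification proposed above):

```python
import networkx as nx
from collections import Counter

G = nx.les_miserables_graph()          # small example network

# k-cores: a layered hierarchy, one shell index per node
shells = nx.core_number(G)
print(Counter(shells.values()))        # nodes per layer

# A binary reading derived from it: innermost shell versus the rest
k_max = max(shells.values())
core = {v for v, k in shells.items() if k == k_max}
periphery = set(G) - core
print(len(core), "core nodes,", len(periphery), "periphery nodes")
```

Whether a given network is better described by the layered or the binary picture is precisely the classification question the typology above is meant to settle.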