The connectome, the entire connectivity of a neural system represented as a network, spans scales ranging from synaptic connections between individual neurons to fibre tract connections between brain regions. Although the modularity these networks commonly show has been extensively studied, it is unclear whether their connection specificity can be fully explained by modularity alone. To answer this question, we study two networks: the neuronal network of C. elegans and the fibre tract network of the human brain obtained through diffusion spectrum imaging (DSI). We compare them to their respective benchmark networks with varying modularities, generated by link swapping to have desired modularity values but to be otherwise maximally random. We find several network properties that are specific to the neural networks and cannot be fully explained by modularity alone. First, the clustering coefficient and the characteristic path length of the C. elegans and human connectomes are both higher than those of benchmark networks with similar modularity. A high clustering coefficient indicates efficient local information distribution, and a high characteristic path length suggests reduced global integration. Second, the total wiring length is smaller than that of alternative configurations with similar modularity. This is due to a lower dispersion of connections, meaning that each neuron in the C. elegans connectome, or each region of interest (ROI) in the human connectome, reaches fewer ganglia or cortical areas, respectively. Third, both neural networks show lower algorithmic entropy than the alternative arrangements, implying that fewer rules are needed to encode the organisation of these neural systems.
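The benchmark construction described above can be sketched in Python with networkx. This is a minimal illustration, not the authors' exact procedure: the reference partition, the toy graph, and the accept/reject rewiring rule (keep a degree-preserving double-edge swap only if it moves modularity no further from the target) are all assumptions.

```python
import random
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

def benchmark_with_modularity(G, target_Q, n_swaps=200, seed=0):
    """Degree-preserving double-edge swaps, accepting a swap only if it
    moves the modularity of a fixed partition no further from target_Q."""
    rng = random.Random(seed)
    H = G.copy()
    parts = greedy_modularity_communities(H)   # fixed reference partition
    Q = modularity(H, parts)
    for _ in range(n_swaps):
        trial = H.copy()
        try:
            nx.double_edge_swap(trial, nswap=1, max_tries=100,
                                seed=rng.randrange(10**9))
        except nx.NetworkXException:
            continue
        Q_trial = modularity(trial, parts)
        if abs(Q_trial - target_Q) <= abs(Q - target_Q):
            H, Q = trial, Q_trial
    return H, Q

G = nx.connected_watts_strogatz_graph(60, 6, 0.1, seed=1)   # stand-in "connectome"
H, Q = benchmark_with_modularity(G, target_Q=0.3)

# The two properties compared in the abstract
cc_G, cc_H = nx.average_clustering(G), nx.average_clustering(H)
L_G = nx.average_shortest_path_length(G)   # characteristic path length
```

Because every accepted trial comes from a double-edge swap, the benchmark keeps the original degree sequence while its modularity drifts toward the target value.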
Local field potentials (LFPs) sampled with extracellular electrodes are frequently used as a measure of population neuronal activity. However, relating such measurements to underlying neuronal behaviour and connectivity is non-trivial. To help study this link, we developed the Virtual Electrode Recording Tool for EXtracellular potentials (VERTEX). We first identified a reduced neuron model that retained the spatial and frequency filtering characteristics of extracellular potentials from neocortical neurons. We then developed VERTEX as an easy-to-use Matlab tool for simulating LFPs from large populations (>100 000 neurons). A VERTEX-based simulation successfully reproduced features of the LFPs from an in vitro multi-electrode array recording of macaque neocortical tissue. Our model, with virtual electrodes placed anywhere in 3D, allows direct comparisons with the in vitro recording setup. We envisage that VERTEX will stimulate experimentalists, clinicians, and computational neuroscientists to use models to understand the mechanisms underlying measured brain dynamics in health and disease.
The understanding of neural activity patterns is fundamentally linked to an understanding of how the brain's network architecture shapes dynamical processes. Established approaches rely mostly on deviations of a given network from certain classes of random graphs. Hypotheses about the supposed role of prominent topological features (for instance, modularity, network motifs, or hierarchical network organization) are derived from these deviations. An alternative strategy is to study deviations of network architectures from regular graphs (rings, lattices) and to consider the implications of such deviations for self-organized dynamic patterns on the network. Following this strategy, we draw on the theory of spatiotemporal pattern formation and propose a novel perspective for analyzing dynamics on networks: evaluating how self-organized dynamics are confined by network architecture to a small set of permissible collective states. In particular, we discuss the role of prominent topological features of brain connectivity, such as hubs, modules, and hierarchy, in shaping activity patterns. We illustrate the notion of network-guided pattern formation with numerical simulations and outline how it can facilitate the understanding of neural dynamics.
Predicting brain maturity from noninvasive magnetic resonance images (MRI) can distinguish different age groups and help to assess neurodevelopmental disorders. However, group-wise differences are often less informative for assessing features of individuals. Here, we propose a simple method to predict the age of an individual subject solely from structural connectivity data obtained by diffusion tensor imaging (DTI). Our predictor computes a weighted sum of the strengths of all connections of an individual. The weight consists of the fibre strength, given by the number of streamlines following tract tracing, multiplied by the importance of that connection for an observed feature (age, in this case). We tested this approach using DTI data from 121 healthy subjects aged 4 to 85 years. After determining connection importance in a training dataset, our predicted ages in the test dataset showed a strong correlation (rho = 0.77) with real age, deviating on average by only 10 years.
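The weighted-sum predictor can be sketched as follows. The data here are entirely synthetic stand-ins for DTI fibre strengths, and the "importance" weighting (correlation of each connection's strength with age in the training set) and the linear rescaling to years are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_connections = 121, 300

# Synthetic stand-in for fibre strengths: some connections drift with age
ages = rng.uniform(4, 85, n_subjects)
coeff = rng.normal(0.0, 0.05, n_connections)          # per-connection age effect
strengths = np.outer(ages, coeff) + rng.normal(0.0, 1.0, (n_subjects, n_connections))

train, test = slice(0, 80), slice(80, None)

# "Importance" of a connection = correlation of its strength with age (train only)
imp = np.array([np.corrcoef(strengths[train, j], ages[train])[0, 1]
                for j in range(n_connections)])

score = strengths @ imp                               # weighted sum per subject
a, b = np.polyfit(score[train], ages[train], 1)       # rescale score to years
pred = a * score[test] + b

rho = np.corrcoef(pred, ages[test])[0, 1]
mean_abs_err = np.mean(np.abs(pred - ages[test]))
```

The key design point is that the importance weights are fixed on the training subjects and then applied unchanged to held-out subjects, so the test correlation measures genuine generalisation rather than fit.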
An essential requirement for the representation of functional patterns in complex neural networks, such as the mammalian cerebral cortex, is the existence of stable regimes of network activation, typically arising from a limited parameter range. In this range of limited sustained activity (LSA), the activity of neural populations in the network persists between the extremes of either quickly dying out or activating the whole network. Hierarchical modular networks were previously found to show a wider parameter range for LSA than random or small-world networks lacking hierarchical organization or multiple modules. Here we explored how variation in the number of hierarchical levels and modules per level influences network dynamics and the occurrence of LSA. We tested hierarchical configurations of different network sizes, approximating the large-scale networks linking cortical columns in one hemisphere of the rat, cat, or macaque monkey brain. Scaling of the network size affected the number of hierarchical levels and modules in the optimal networks, depending also on whether the global edge density or the number of connections per node was kept constant. For constant edge density, only a few network configurations, possessing an intermediate number of levels and a large number of modules, led to a large range of LSA independent of brain size. For a constant number of connections per node, optimal configurations in larger networks tended to possess a larger number of hierarchical levels or more modules. These results may help to explain the trend towards greater network complexity apparent in larger brains, and may indicate that this complexity is required for maintaining stable levels of neural activation.
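A hierarchical modular network with a chosen number of levels and modules per level can be generated along these lines. The construction (a module tree, with connection probability decaying by a fixed ratio for each level at which two nodes' module paths diverge) and all parameter values are assumptions for illustration, not the configurations tested in the study.

```python
import itertools
import random

def hierarchical_modular(n_nodes, levels, modules_per_level,
                         p_top=0.9, ratio=0.3, seed=0):
    """Assign each node a path in a module tree; connect a pair with a
    probability that falls by `ratio` per level at which their paths diverge."""
    rng = random.Random(seed)
    paths = [tuple(rng.randrange(modules_per_level) for _ in range(levels))
             for _ in range(n_nodes)]
    edges = set()
    for i, j in itertools.combinations(range(n_nodes), 2):
        shared = 0                      # length of the common module prefix
        for a, b in zip(paths[i], paths[j]):
            if a != b:
                break
            shared += 1
        if rng.random() < p_top * ratio ** (levels - shared):
            edges.add((i, j))
    return paths, edges

# e.g. 3 hierarchical levels with 2 modules per level (8 bottom-level modules)
paths, edges = hierarchical_modular(80, levels=3, modules_per_level=2)
```

Varying `levels` and `modules_per_level` while holding either the total edge count or the mean degree fixed is the kind of sweep the abstract describes.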
Neural connectivity at the cellular and mesoscopic level appears very specific and is presumed to arise from highly specific developmental mechanisms. However, there are general shared features of connectivity in systems as different as the networks formed by individual neurons in Caenorhabditis elegans or in rat visual cortex and the mesoscopic circuitry of cortical areas in the mouse, macaque, and human brain. In all these systems, connection length distributions have very similar shapes, with an initial large peak and a long flat tail representing the admixture of long-distance connections to mostly short-distance connections. Furthermore, not all potentially possible synapses are formed, and only a fraction of axons (called the filling fraction) establish synapses with spatially neighboring neurons. We explored which aspects of these connectivity patterns can be explained simply by random axonal outgrowth. We found that random axonal growth away from the soma can already reproduce the known distance distribution of connections. We also observed that experimentally observed filling fractions can be generated by competition for available space at the target neurons, a model markedly different from previous explanations. These findings may serve as a baseline model for the development of connectivity that can be further refined by more specific mechanisms.
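A toy version of random axonal outgrowth can be written in a few lines: axons grow in straight lines from randomly placed somata in random directions, and a connection forms whenever a growth cone passes within reach of another soma. All geometry and parameter values here are illustrative assumptions, and the competition-for-space mechanism for filling fractions is not modelled.

```python
import math
import random

def grow_connections(n_neurons=200, arena=100.0, axon_len=60.0,
                     reach=2.0, step=1.0, seed=0):
    """Straight-line axonal growth: a directed connection i -> j forms when
    the growth cone of neuron i passes within `reach` of soma j."""
    rng = random.Random(seed)
    somata = [(rng.uniform(0, arena), rng.uniform(0, arena))
              for _ in range(n_neurons)]
    edges = set()
    for i, (x, y) in enumerate(somata):
        theta = rng.uniform(0, 2 * math.pi)         # random growth direction
        for s in range(1, int(axon_len / step) + 1):
            px = x + s * step * math.cos(theta)
            py = y + s * step * math.sin(theta)
            for j, (tx, ty) in enumerate(somata):
                if j != i and (i, j) not in edges and math.hypot(px - tx, py - ty) < reach:
                    edges.add((i, j))
    # Soma-to-soma distances of the formed connections
    lengths = [math.hypot(somata[i][0] - somata[j][0], somata[i][1] - somata[j][1])
               for i, j in edges]
    return edges, lengths

edges, lengths = grow_connections()
```

A histogram of `lengths` is the kind of connection-distance distribution the abstract compares against experimental data.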
An essential requirement for the representation of functional patterns in complex neural networks, such as the mammalian cerebral cortex, is the existence of stable network activations within a limited critical range. In this range, the activity of neural populations in the network persists between the extremes of quickly dying out or activating the whole network. The nerve fibre network of the mammalian cerebral cortex possesses a modular organization extending across several levels. Using a basic spreading model without inhibition, we investigated how functional activations of nodes propagate through such a hierarchically clustered network. The simulations demonstrated that persistent and scalable activation could be produced in clustered networks, but not in random networks of the same size. Moreover, the parameter range yielding critical activations was substantially larger in hierarchical cluster networks than in small-world networks of the same size. These findings indicate that a hierarchical cluster architecture may provide the structural basis for the stable and diverse functional patterns observed in cortical networks.
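A spreading model of this kind, run on a clustered versus a size-matched random network, can be sketched as follows. The specific update rule (each active neighbour independently activates a node with probability `p_act`; active nodes decay with probability `p_decay`) and the parameter values are generic assumptions, not the paper's exact model.

```python
import random
import networkx as nx

def spread(G, p_act=0.1, p_decay=0.2, steps=50, n_seed=5, seed=0):
    """Spreading without inhibition: an inactive node activates with
    probability 1-(1-p_act)**k given k active neighbours; an active node
    deactivates with probability p_decay. Returns activity per time step."""
    rng = random.Random(seed)
    active = set(rng.sample(list(G.nodes), n_seed))
    history = []
    for _ in range(steps):
        nxt = set()
        for v in G.nodes:
            if v in active:
                if rng.random() > p_decay:          # survives decay
                    nxt.add(v)
            else:
                k = sum(1 for u in G.neighbors(v) if u in active)
                if k and rng.random() < 1 - (1 - p_act) ** k:
                    nxt.add(v)
        active = nxt
        history.append(len(active))
    return history

clustered = nx.relaxed_caveman_graph(10, 10, 0.1, seed=1)  # 10 modules of 10 nodes
rewired = nx.gnm_random_graph(100, clustered.number_of_edges(), seed=1)
h_clustered = spread(clustered)
h_random = spread(rewired)
```

Sweeping `p_act` and `p_decay` and recording which pairs keep `history` away from both 0 and the full network size maps out the critical range the abstract refers to.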