
Exponential distance distribution of connected neurons in simulations of two-dimensional in vitro neural network development

Posted by: Dr. Yanjun Wang
Publication date: 2017
Research field: Biology; Physics
Language: English





The distribution of the geometric distances between connected neurons is a basic structural property of neural networks in the brain, and it can affect the brain's dynamic behaviour at a fundamental level. Karbowski derived a power-law decay for this distribution, a prediction that has not yet been verified experimentally. In this work, we check its validity using simulations with a phenomenological model. Based on Ito's in vitro two-dimensional development of neural networks in culture vessels, we match the synapse-number saturation time to obtain suitable parameters for the development process and then determine the distribution of distances between connected neurons under these conditions. Our simulations yield a clear exponential distribution rather than a power law, which indicates that Karbowski's conclusion is invalid, at least for in vitro neural network development in two-dimensional culture vessels.
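
To make the comparison concrete, the following is a minimal Python sketch of how exponential and power-law candidates can be compared on the distances of connected neuron pairs. It does not reproduce the paper's phenomenological development model: the culture size, decay rate, and connection rule below are illustrative assumptions, and the synthetic connections are drawn from an assumed distance-dependent probability.

# Minimal sketch (not the paper's model): compare exponential vs power-law fits
# to the distances of connected neuron pairs in a synthetic 2-D culture.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical culture: N neurons scattered uniformly in a 1 mm x 1 mm vessel.
N = 500
pos = rng.uniform(0.0, 1.0, size=(N, 2))           # positions in mm

# Illustrative connectivity: connection probability decays with distance
# (assumed form, NOT the phenomenological development model of the paper).
d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
lam_true = 5.0                                      # assumed decay rate, 1/mm
p_conn = 0.5 * np.exp(-lam_true * d)
adj = (rng.random((N, N)) < p_conn) & ~np.eye(N, dtype=bool)

dist = d[adj]                                       # distances of connected pairs
dmin = dist.min()

# Maximum-likelihood fits on the same support [dmin, inf).
lam_hat = 1.0 / np.mean(dist - dmin)                          # shifted exponential
alpha_hat = 1.0 + len(dist) / np.sum(np.log(dist / dmin))     # Pareto / power law

ll_exp = np.sum(np.log(lam_hat) - lam_hat * (dist - dmin))
ll_pow = np.sum(np.log((alpha_hat - 1) / dmin) - alpha_hat * np.log(dist / dmin))

print(f"log-likelihood  exponential: {ll_exp:.1f}   power law: {ll_pow:.1f}")
# A clearly higher exponential log-likelihood would mirror the paper's finding.

Fitting both candidates by maximum likelihood on the same support keeps the two log-likelihoods directly comparable; in the actual study, the list of connected pairs would come from the simulated development process rather than from the assumed rule above.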




Read also

During development, the mammalian brain differentiates into specialized regions with distinct functional abilities. While many factors contribute to functional specialization, we explore the effect of neuronal density on the development of neuronal interactions in vitro. Two types of cortical networks, dense and sparse, with 50,000 and 12,000 total cells respectively, are studied. Activation graphs that represent pairwise neuronal interactions are constructed using a competitive first response model. These graphs reveal that, during development in vitro, dense networks form activation connections earlier than sparse networks. Link entropy analysis of dense network activation graphs suggests that the majority of connections between electrodes are reciprocal in nature. Information theoretic measures reveal that early functional information interactions (among 3 cells) are synergetic in both dense and sparse networks. However, during later stages of development, previously synergetic relationships become primarily redundant in dense, but not in sparse networks. Large link entropy values in the activation graph are related to the domination of redundant ensembles in late stages of development in dense networks. Results demonstrate differences between dense and sparse networks in terms of informational groups, pairwise relationships, and activation graphs. These differences suggest that variations in cell density may result in different functional specialization of nervous system tissue in vivo.
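
The abstract does not spell out the estimator behind its synergy/redundancy classification; the sketch below shows one standard triplet measure, the redundancy-synergy index I({X,Y};Z) - I(X;Z) - I(Y;Z), computed on hypothetical binarized spike trains. The XOR example and all variable names are illustrative assumptions, not the study's pipeline.

# Minimal sketch (illustrative, not the paper's pipeline): a redundancy-synergy
# index for a triplet of binarized spike trains, I({X,Y};Z) - I(X;Z) - I(Y;Z).
# Positive values indicate synergy, negative values redundancy.
import numpy as np

def entropy(counts):
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_info(a, b):
    """I(A;B) in bits for 1-D integer-coded arrays."""
    joint = np.zeros((a.max() + 1, b.max() + 1))
    for x, y in zip(a, b):
        joint[x, y] += 1
    return entropy(joint.sum(1)) + entropy(joint.sum(0)) - entropy(joint.ravel())

def rsi(x, y, z):
    """Redundancy-synergy index for binary spike trains x, y, z."""
    xy = 2 * x + y                       # joint state of the (X, Y) pair
    return mutual_info(xy, z) - mutual_info(x, z) - mutual_info(y, z)

# Hypothetical binned spike trains (0/1 per time bin).
rng = np.random.default_rng(1)
x = rng.integers(0, 2, 10000)
y = rng.integers(0, 2, 10000)
z = x ^ y                                # XOR: a purely synergetic example
print(f"RSI = {rsi(x, y, z):.2f} bits")  # close to +1 bit (synergy)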
Maximum entropy models are the least structured probability distributions that exactly reproduce a chosen set of statistics measured in an interacting network. Here we use this principle to construct probabilistic models which describe the correlated spiking activity of populations of up to 120 neurons in the salamander retina as it responds to natural movies. Already in groups as small as 10 neurons, interactions between spikes can no longer be regarded as small perturbations in an otherwise independent system; for 40 or more neurons pairwise interactions need to be supplemented by a global interaction that controls the distribution of synchrony in the population. Here we show that such K-pairwise models--being systematic extensions of the previously used pairwise Ising models--provide an excellent account of the data. We explore the properties of the neural vocabulary by: 1) estimating its entropy, which constrains the population's capacity to represent visual information; 2) classifying activity patterns into a small set of metastable collective modes; 3) showing that the neural codeword ensembles are extremely inhomogeneous; 4) demonstrating that the state of individual neurons is highly predictable from the rest of the population, allowing the capacity for error correction.
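
As a rough illustration of the maximum entropy idea (not the K-pairwise model or the retinal data), the sketch below fits a pairwise Ising model to the means and pairwise correlations of synthetic binary words by exact enumeration, which is only feasible for a handful of neurons; the population sizes in the paper require Monte Carlo or other approximate methods.

# Minimal sketch (assumed toy setup): fit a pairwise Ising maximum-entropy model
# P(s) proportional to exp(sum_i h_i s_i + (1/2) sum_ij J_ij s_i s_j) so that it
# matches the means and pairwise correlations of binary patterns, by gradient
# ascent with exact enumeration over all 2^n words.
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
n = 5
data = (rng.random((2000, n)) < 0.2).astype(float)       # hypothetical spike words

mean_data = data.mean(0)
corr_data = data.T @ data / len(data)

states = np.array(list(product([0, 1], repeat=n)), dtype=float)  # all 2^n words
h = np.zeros(n)
J = np.zeros((n, n))

for _ in range(2000):
    energy = states @ h + np.einsum('ki,ij,kj->k', states, J, states) / 2
    p = np.exp(energy - energy.max())
    p /= p.sum()
    mean_model = p @ states
    corr_model = states.T @ (states * p[:, None])
    h += 0.1 * (mean_data - mean_model)                  # match first moments
    J += 0.1 * (corr_data - corr_model)                  # match second moments
    np.fill_diagonal(J, 0.0)

print("max moment mismatch:", np.abs(corr_data - corr_model).max())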
H. Sebastian Seung, 2018
A companion paper introduces a nonlinear network with Hebbian excitatory (E) neurons that are reciprocally coupled with anti-Hebbian inhibitory (I) neurons and also receive Hebbian feedforward excitation from sensory (S) afferents. The present paper derives the network from two normative principles that are mathematically equivalent but conceptually different. The first principle formulates unsupervised learning as a constrained optimization problem: maximization of S-E correlations subject to a copositivity constraint on E-E correlations. A combination of Legendre and Lagrangian duality yields a zero-sum continuous game between excitatory and inhibitory connections that is solved by the neural network. The second principle defines a zero-sum game between E and I cells. E cells want to maximize S-E correlations and minimize E-I correlations, while I cells want to maximize I-E correlations and minimize power. The conflict between I and E objectives effectively forces the E cells to decorrelate from each other, although only incompletely. Legendre duality yields the neural network.
In this study, we have investigated factors of determination which can affect the connected structure of a stock network. The representative index for topological properties of a stock network is the number of links with other stocks. We used the multi-factor model, extensively acknowledged in the financial literature. In the multi-factor model, common factors act as independent variables while returns of individual stocks act as dependent variables. We calculated the coefficient of determination, which measures the degree to which the dependent variables are explained by the independent variables. We then investigated the relationship between the number of links in the stock network and the coefficient of determination in the multi-factor model. We used individual stocks traded on the market indices of Korea, Japan, Canada, Italy and the UK. The results are as follows. We found that the mean coefficient of determination of stocks with a large number of links is higher than that of stocks with a small number of links to other stocks. These results suggest that common factors are significant determinants to be taken into account when constructing a stock network. Furthermore, stocks with a large number of links to other stocks can be more affected by common factors.
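
A minimal sketch of the multi-factor regression step, on synthetic data with hypothetical factors and loadings: each stock's returns are regressed on the common factors and the coefficient of determination R^2 is recorded, the quantity the study then compares against the stock's number of links.

# Minimal sketch (synthetic data, hypothetical factors): per-stock R^2 from a
# multi-factor regression of returns on common factors.
import numpy as np

rng = np.random.default_rng(3)
T, n_stocks, n_factors = 500, 20, 3

factors = rng.normal(size=(T, n_factors))             # assumed common factors
loadings = rng.normal(size=(n_factors, n_stocks))
noise = rng.normal(scale=np.linspace(0.5, 3.0, n_stocks), size=(T, n_stocks))
returns = factors @ loadings + noise                  # hypothetical stock returns

X = np.column_stack([np.ones(T), factors])            # intercept + factors
r2 = np.empty(n_stocks)
for i in range(n_stocks):
    beta, *_ = np.linalg.lstsq(X, returns[:, i], rcond=None)
    resid = returns[:, i] - X @ beta
    r2[i] = 1.0 - resid.var() / returns[:, i].var()

print("R^2 per stock:", np.round(r2, 2))
# Per the study, stocks whose returns are well explained by the common factors
# (high R^2) tend to have more links to other stocks in the network.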
Correlations in sensory neural networks have both extrinsic and intrinsic origins. Extrinsic or stimulus correlations arise from shared inputs to the network, and thus depend strongly on the stimulus ensemble. Intrinsic or noise correlations reflect biophysical mechanisms of interactions between neurons, which are expected to be robust to changes of the stimulus ensemble. Despite the importance of this distinction for understanding how sensory networks encode information collectively, no method exists to reliably separate intrinsic interactions from extrinsic correlations in neural activity data, limiting our ability to build predictive models of the network response. In this paper we introduce a general strategy to infer population models of interacting neurons that collectively encode stimulus information. The key to disentangling intrinsic from extrinsic correlations is to infer the couplings between neurons separately from the encoding model, and to combine the two using corrections calculated in a mean-field approximation. We demonstrate the effectiveness of this approach on retinal recordings. The same coupling network is inferred from responses to radically different stimulus ensembles, showing that these couplings indeed reflect stimulus-independent interactions between neurons. The inferred model predicts accurately the collective response of retinal ganglion cell populations as a function of the stimulus.
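
The paper's mean-field correction is not reproduced here; as a simpler point of reference, the sketch below illustrates the underlying distinction with the standard signal/noise correlation decomposition over repeated stimulus presentations, using hypothetical spike counts for two cells.

# Minimal sketch (standard signal/noise decomposition, not the paper's
# mean-field inference): with repeated presentations of the same stimuli,
# correlations of the repeat-averaged responses reflect shared stimulus drive,
# while correlations of the repeat-to-repeat residuals reflect intrinsic
# (noise) interactions.
import numpy as np

rng = np.random.default_rng(4)
n_stim, n_rep, n_cells = 50, 30, 2

# Hypothetical spike counts: shared stimulus drive + correlated noise.
drive = rng.normal(size=(n_stim, 1, 1)) * np.array([1.0, 1.0])
noise = rng.multivariate_normal([0, 0], [[1.0, 0.4], [0.4, 1.0]],
                                size=(n_stim, n_rep))
counts = drive + noise                                 # shape (stim, rep, cell)

psth = counts.mean(axis=1)                             # average over repeats
residual = counts - psth[:, None, :]

signal_corr = np.corrcoef(psth[:, 0], psth[:, 1])[0, 1]
noise_corr = np.corrcoef(residual[..., 0].ravel(), residual[..., 1].ravel())[0, 1]
print(f"signal correlation: {signal_corr:.2f}   noise correlation: {noise_corr:.2f}")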