
Using spike train distances to identify the most discriminative neuronal subpopulation

Posted by: Eero Satuvuori
Publication date: 2018
Research field: Biology, Physics
Language: English





Background: Spike trains of multiple neurons can be analyzed following either the summed population (SP) or the labeled line (LL) hypothesis: responses to external stimuli are generated by the neuronal population as a whole, or the individual neurons have encoding capacities of their own. The SPIKE-distance, estimated either for a single spike train pooled over the population or for each neuron separately, can serve to quantify these responses. New Method: For the SP case we compare three algorithms that search for the most discriminative subpopulation over all stimulus pairs. For the LL case we introduce a new algorithm that combines the neurons that individually best separate different pairs of stimuli. Results: The best approach for SP is a brute-force search over all possible subpopulations, but it is feasible only for small populations. For more realistic settings, simulated annealing clearly outperforms gradient algorithms at only a limited increase in computational load. Our novel LL approach can handle very involved coding scenarios despite its computational ease. Comparison with Existing Methods: Spike train distances have been extended to the analysis of neural populations by interpolating between SP and LL coding, which includes parametrizing the importance of distinguishing spikes fired in different neurons. Yet these approaches only consider the population as a whole. The explicit focus on subpopulations renders our algorithms complementary. Conclusions: The spectrum of encoding possibilities in neural populations is broad. The SP and LL cases are two extremes for which our algorithms provide correct identification results.
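As a concrete illustration of the SP search strategies described above, the sketch below contrasts a brute-force enumeration of all subpopulations with a simulated-annealing search. It is a minimal, assumption-laden sketch and not the authors' implementation: the data layout (responses[stimulus][neuron] as lists of spike times), the averaged pairwise discriminability score, and the annealing schedule are all illustrative choices, and the distance argument stands in for a spike train distance such as the SPIKE-distance (available, for example, in the PySpike library).

```python
# Minimal sketch (not the authors' code): searching for the most discriminative
# subpopulation under the summed-population (SP) hypothesis.
import itertools
import math
import random

def pooled_train(spike_trains, subset):
    """Merge the spike trains of the selected neurons into one pooled train."""
    return sorted(t for n in subset for t in spike_trains[n])

def discriminability(responses, subset, distance):
    """Average pairwise distance between pooled responses to different stimuli.
    responses[s][n] is the spike train of neuron n for stimulus s (assumed layout)."""
    pairs = list(itertools.combinations(range(len(responses)), 2))
    score = sum(distance(pooled_train(responses[s1], subset),
                         pooled_train(responses[s2], subset))
                for s1, s2 in pairs)
    return score / len(pairs)

def brute_force_search(responses, n_neurons, distance):
    """Exhaustive search over all non-empty subpopulations (2**N - 1 candidates,
    so feasible only for small populations)."""
    best_subset, best_score = None, -math.inf
    for k in range(1, n_neurons + 1):
        for subset in itertools.combinations(range(n_neurons), k):
            score = discriminability(responses, subset, distance)
            if score > best_score:
                best_subset, best_score = subset, score
    return best_subset, best_score

def simulated_annealing_search(responses, n_neurons, distance,
                               n_steps=2000, t0=1.0, cooling=0.995, seed=0):
    """Stochastic search: flip one neuron's membership per step and accept
    worse moves with a temperature-dependent probability."""
    rng = random.Random(seed)
    current = {rng.randrange(n_neurons)}
    current_score = discriminability(responses, tuple(current), distance)
    best, best_score = set(current), current_score
    temp = t0
    for _ in range(n_steps):
        candidate = set(current)
        candidate.symmetric_difference_update({rng.randrange(n_neurons)})
        if not candidate:
            continue
        score = discriminability(responses, tuple(candidate), distance)
        if score > current_score or rng.random() < math.exp((score - current_score) / temp):
            current, current_score = candidate, score
            if score > best_score:
                best, best_score = set(candidate), score
        temp *= cooling
    return tuple(sorted(best)), best_score
```

The brute-force variant scales exponentially in the number of neurons, which is why the abstract restricts it to small populations; the annealing variant changes one neuron's membership per step and therefore remains tractable for the more realistic settings mentioned above.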


Read also

Shinsuke Koyama, 2014
We propose a statistical method for modeling the non-Poisson variability of spike trains observed in a wide range of brain regions. Central to our approach is the assumption that the variance and the mean of interspike intervals are related by a power function characterized by two parameters: the scale factor and exponent. It is shown that this single assumption allows the variability of spike trains to have an arbitrary scale and various dependencies on the firing rate in the spike count statistics, as well as in the interval statistics, depending on the two parameters of the power function. We also propose a statistical model for spike trains that exhibits the variance-to-mean power relationship, and based on this a maximum likelihood method is developed for inferring the parameters from rate-modulated spike trains. The proposed method is illustrated on simulated and experimental spike trains.
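A rough way to see the variance-to-mean power relation in data, before applying the maximum-likelihood machinery proposed in the abstract, is a log-log fit of windowed ISI variance against windowed ISI mean. The sketch below is only an illustration under that simplification; the window length, the minimum ISI count per window, and the surrogate gamma process in the usage example are assumptions, not part of the original method.

```python
# Minimal sketch (not Koyama's estimator): check a variance-to-mean power
# relation Var(ISI) ~ scale * Mean(ISI)**exponent via a log-log fit.
import numpy as np

def isi_variance_mean_power_fit(spike_times, window=1.0):
    """Return (scale, exponent) from a log-log fit of windowed ISI variance vs. mean."""
    spikes = np.asarray(spike_times)
    isis = np.diff(spikes)
    mids = spikes[:-1]                     # assign each ISI to its leading spike
    means, variances = [], []
    for lo in np.arange(spikes[0], spikes[-1], window):
        mask = (mids >= lo) & (mids < lo + window)
        if mask.sum() >= 3:                # need a few ISIs per window
            means.append(isis[mask].mean())
            variances.append(isis[mask].var(ddof=1))
    exponent, log_scale = np.polyfit(np.log(means), np.log(variances), deg=1)
    return np.exp(log_scale), exponent

# Usage with a surrogate rate-modulated gamma process (fixed shape => exponent ~ 2).
rng = np.random.default_rng(0)
spikes, t = [], 0.0
for _ in range(50):
    scale = rng.uniform(0.02, 0.2)         # slowly varying mean ISI
    for _ in range(200):
        t += rng.gamma(shape=4.0, scale=scale)
        spikes.append(t)
print(isi_variance_mean_power_fit(spikes, window=5.0))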
Understanding information processing in the brain requires the ability to determine the functional connectivity between the different regions of the brain. We present a method using transfer entropy to extract this flow of information between brain regions from spike-train data commonly obtained in neurological experiments. Transfer entropy is a statistical measure grounded in information theory that attempts to quantify the information flow from one process to another, and it has been applied to find connectivity in simulated spike-train data. Due to statistical error in the estimator, inferring functional connectivity requires a method for determining the significance of the transfer entropy values. We discuss the issues with numerical estimation of transfer entropy and the resulting challenges in determining significance before presenting the trial-shuffle method as a viable option. The trial-shuffle method, for spike-train data that is split into multiple trials, determines significant transfer entropy values independently for each individual pair of neurons by comparing against a constructed baseline distribution using a rigorous statistical test. This is in contrast to either globally comparing all neuron transfer entropy values or comparing pairwise values to a single baseline value. In establishing the viability of this method by comparison to several alternative approaches in the literature, we find evidence that preserving the inter-spike-interval timing is important. We then use the trial-shuffle method to investigate information flow within a model network as we vary model parameters. This includes investigating the global flow of information within a connectivity network divided into two well-connected subnetworks, going beyond local transfer of information between pairs of neurons.
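The sketch below captures one reading of the trial-shuffle idea: the transfer-entropy estimate for a neuron pair is recomputed many times with the source neuron's trials permuted relative to the target's, which preserves each trial's inter-spike-interval timing while destroying the trial-by-trial pairing, and the observed value is then compared against this pairwise null distribution. The te_estimator callable is an assumed black box, and the shuffle count and one-sided p-value convention are illustrative choices rather than the authors' exact protocol.

```python
# Minimal sketch of a trial-shuffle significance test for transfer entropy.
import numpy as np

def trial_shuffle_pvalue(source_trials, target_trials, te_estimator,
                         n_shuffles=200, seed=0):
    """P-value for TE(source -> target) against a trial-shuffled null distribution."""
    rng = np.random.default_rng(seed)
    observed = te_estimator(source_trials, target_trials)
    null_values = np.empty(n_shuffles)
    n_trials = len(source_trials)
    for i in range(n_shuffles):
        # Permute which source trial is paired with which target trial;
        # spike timing within each trial is left untouched.
        perm = rng.permutation(n_trials)
        shuffled_source = [source_trials[j] for j in perm]
        null_values[i] = te_estimator(shuffled_source, target_trials)
    # One-sided p-value with the usual +1 correction for a finite null sample.
    return (1 + np.sum(null_values >= observed)) / (1 + n_shuffles)
```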
Our mysterious brain is believed to operate near a non-equilibrium point and generate critical self-organized avalanches in neuronal activity. Recent experimental evidence has revealed significant heterogeneity in both synaptic input and output connectivity, but whether this structural heterogeneity participates in the regulation of neuronal avalanches remains poorly understood. By computational modelling, we predict that different types of structural heterogeneity have distinct effects on avalanche neurodynamics. In particular, neuronal avalanches can be triggered at an intermediate level of input heterogeneity, whereas heterogeneous output connectivity cannot evoke avalanche dynamics. In the criticality region, the co-emergence of multi-scale cortical activities is observed, and both the avalanche dynamics and neuronal oscillations are modulated by the input heterogeneity. Remarkably, we show that similar results can be reproduced in networks with various types of in- and out-degree distributions. Overall, these findings not only provide details on the underlying circuit mechanisms by which nonrandom synaptic connectivity regulates neuronal avalanches, but also inspire testable hypotheses for future experimental studies.
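For reference, the avalanche statistics discussed above are conventionally extracted from pooled population activity as in the sketch below: spikes are binned, and an avalanche is a run of consecutive non-empty bins terminated by an empty bin, its size being the total spike count in the run. This is the generic definition rather than the specific analysis pipeline of the study, and the bin width is left as a free parameter.

```python
# Minimal sketch of the standard avalanche-size extraction from pooled spikes.
import numpy as np

def avalanche_sizes(all_spike_times, bin_width):
    """Return the list of avalanche sizes (total spikes per avalanche)."""
    spikes = np.sort(np.concatenate(all_spike_times))   # pooled population spikes
    n_bins = int(np.ceil((spikes[-1] - spikes[0]) / bin_width)) + 1
    counts, _ = np.histogram(spikes, bins=n_bins,
                             range=(spikes[0], spikes[0] + n_bins * bin_width))
    sizes, current = [], 0
    for c in counts:
        if c > 0:
            current += int(c)
        elif current > 0:          # an empty bin terminates the avalanche
            sizes.append(current)
            current = 0
    if current > 0:
        sizes.append(current)
    return sizes
```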
The role of synchronous firing in sensory coding and cognition remains controversial. While studies focusing on its mechanistic consequences in attentional tasks suggest that synchronization dynamically boosts sensory processing, others have failed to find significant synchronization levels in such tasks. We attempt to understand both lines of evidence within a coherent theoretical framework. We conceptualize synchronization as an independent control parameter to study how the postsynaptic neuron transmits the average firing activity of a presynaptic population in the presence of synchronization. We apply the Berger-Levy theory of energy-efficient information transmission to interpret simulations of a Hodgkin-Huxley-type postsynaptic neuron model, in which we varied the firing rate and synchronization level in the presynaptic population independently. We find that for a fixed presynaptic firing rate the simulated postsynaptic interspike interval distribution depends on the synchronization level and is well described by a generalized extreme value distribution. For synchronization levels between 15% and 50%, the mutual information per unit cost, evaluated for the optimal distribution of presynaptic firing rates, is maximized at a synchronization level of about 30%. These results suggest that the statistics and energy efficiency of neuronal communication channels, through which the input rate is communicated, can be dynamically adapted by the synchronization level.
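The generalized extreme value fit reported above can be reproduced in outline with standard tools; the sketch below fits a GEV distribution to an ISI sample and checks the fit with a Kolmogorov-Smirnov test. The surrogate gamma-process spike train in the usage example is purely illustrative, since the actual ISIs come from the Hodgkin-Huxley-type simulations.

```python
# Minimal sketch (an illustration, not the authors' analysis pipeline):
# fit a generalized extreme value (GEV) distribution to inter-spike intervals.
import numpy as np
from scipy import stats

def fit_gev_to_isis(spike_times):
    """Fit a GEV distribution to the ISIs and return (shape, loc, scale)."""
    isis = np.diff(np.sort(np.asarray(spike_times)))
    shape, loc, scale = stats.genextreme.fit(isis)
    return shape, loc, scale

# Toy usage with surrogate spike times (the real data come from the HH model).
rng = np.random.default_rng(1)
surrogate_spikes = np.cumsum(rng.gamma(shape=3.0, scale=0.01, size=2000))
params = fit_gev_to_isis(surrogate_spikes)
print("GEV shape, loc, scale:", params)
# Goodness of fit can then be checked, e.g. with a Kolmogorov-Smirnov test.
print(stats.kstest(np.diff(surrogate_spikes), "genextreme", args=params))
```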
A good understanding of how neurons use electrical pulses (i.e., spikes) to encode signal information remains elusive. Analyzing spike sequences generated by individual neurons and by two coupled neurons (using the stochastic FitzHugh-Nagumo model), recent theoretical studies have found that the relative timing of the spikes can encode the signal information. Using a symbolic method to analyze the spike sequence, preferred and infrequent spike patterns were detected, whose probabilities vary with both the amplitude and the frequency of the signal. To investigate whether this encoding mechanism is plausible also for neuronal ensembles, here we analyze the activity of a group of neurons that all perceive a weak periodic signal. We find that, as in the case of one or two coupled neurons, the probabilities of the spike patterns, now computed from the spike sequences of all the neurons, depend on the signal's amplitude and period, and thus the pattern probabilities encode the information of the signal. We also find that the resonances with the period of the signal or with the noise level are more pronounced when a group of neurons perceives the signal than when only one or two coupled neurons perceive it. Neuronal coupling is beneficial for signal encoding, as a group of neurons is able to encode a small-amplitude signal that could not be encoded when perceived by just one or two coupled neurons. Interestingly, we find that for a group of neurons, just a few connections with one another can significantly improve the encoding of small-amplitude signals. Our findings indicate that information encoding in preferred and infrequent spike patterns is a plausible mechanism that neuronal populations can employ to encode weak periodic inputs, exploiting the presence of neural noise.
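The symbolic analysis referred to above maps groups of consecutive inter-spike intervals to ordinal patterns (the permutation that sorts them) and compares the relative frequencies of these words, so that preferred and infrequent spike patterns stand out. The sketch below is a minimal version of that bookkeeping; the word length of three is an assumption for illustration, and ties between equal ISIs simply follow argsort's default ordering.

```python
# Minimal sketch of ordinal-pattern (symbolic) analysis of a spike sequence.
from collections import Counter
from itertools import permutations
import numpy as np

def ordinal_pattern_probabilities(spike_times, word_length=3):
    """Relative frequency of each ordinal pattern of `word_length` consecutive ISIs."""
    isis = np.diff(np.asarray(spike_times))
    counts = Counter()
    for i in range(len(isis) - word_length + 1):
        # The permutation that sorts the ISI word is the symbol.
        word = tuple(int(k) for k in np.argsort(isis[i:i + word_length]))
        counts[word] += 1
    total = sum(counts.values())
    if total == 0:
        return {}
    return {p: counts.get(p, 0) / total for p in permutations(range(word_length))}
```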