
Neuronal Synchronization Can Control the Energy Efficiency of Inter-Spike Interval Coding

Added by Siavash Ghavami
Publication date: 2018
Field: Biology
Language: English





The role of synchronous firing in sensory coding and cognition remains controversial. While studies focusing on its mechanistic consequences in attentional tasks suggest that synchronization dynamically boosts sensory processing, others have failed to find significant synchronization levels in such tasks. We attempt to understand both lines of evidence within a coherent theoretical framework. We conceptualize synchronization as an independent control parameter to study how the postsynaptic neuron transmits the average firing activity of a presynaptic population in the presence of synchronization. We apply the Berger-Levy theory of energy-efficient information transmission to interpret simulations of a Hodgkin-Huxley-type postsynaptic neuron model in which we varied the firing rate and synchronization level of the presynaptic population independently. We find that, for a fixed presynaptic firing rate, the simulated postsynaptic inter-spike interval distribution depends on the synchronization level and is well described by a generalized extreme value distribution. For synchronization levels between 15% and 50%, we compute the optimal distribution of presynaptic firing rates that maximizes the mutual information per unit cost, and find that this mutual information per unit cost peaks at a synchronization level of roughly 30%. These results suggest that the statistics and energy efficiency of neuronal communication channels, through which the input rate is communicated, can be dynamically adapted by the synchronization level.
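As an illustration of the kind of analysis the abstract describes, the sketch below fits a generalized extreme value (GEV) distribution to a set of inter-spike intervals with scipy.stats.genextreme. It is a minimal sketch only: the Hodgkin-Huxley-type simulation and the Berger-Levy optimization are not reproduced, and the gamma-distributed ISI samples are a placeholder for the simulated postsynaptic intervals.

```python
# Minimal sketch: fit a generalized extreme value (GEV) distribution to
# inter-spike intervals (ISIs).  The synthetic gamma ISIs below are only a
# placeholder for the postsynaptic intervals produced by the HH-type model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder ISIs (seconds) at one presynaptic rate / synchronization level.
isis = rng.gamma(shape=2.0, scale=0.01, size=5000)

# Fit the three-parameter GEV distribution (shape, location, scale).
c, loc, scale = stats.genextreme.fit(isis)
print(f"GEV fit: shape={c:.3f}, loc={loc * 1e3:.2f} ms, scale={scale * 1e3:.2f} ms")

# Quantify goodness of fit with a Kolmogorov-Smirnov test against the fit.
ks_stat, p_value = stats.kstest(isis, "genextreme", args=(c, loc, scale))
print(f"KS statistic = {ks_stat:.4f}, p = {p_value:.3f}")
```

Repeating such a fit across synchronization levels is what would reveal how the ISI statistics, and hence the communication channel, change with synchronization.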



Related research

Our mysterious brain is believed to operate near a non-equilibrium point and to generate critical self-organized avalanches in neuronal activity. Recent experimental evidence has revealed significant heterogeneity in both synaptic input and output connectivity, but whether this structural heterogeneity participates in the regulation of neuronal avalanches remains poorly understood. Using computational modelling, we predict that different types of structural heterogeneity have distinct effects on avalanche dynamics. In particular, neuronal avalanches can be triggered at an intermediate level of input heterogeneity, whereas heterogeneous output connectivity cannot evoke avalanche dynamics. In the criticality region, the co-emergence of multi-scale cortical activities is observed, and both the avalanche dynamics and neuronal oscillations are modulated by the input heterogeneity. Remarkably, we show that similar results can be reproduced in networks with various types of in- and out-degree distributions. Overall, these findings not only provide details on the circuit mechanisms by which nonrandom synaptic connectivity regulates neuronal avalanches, but also inspire testable hypotheses for future experimental studies.
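To make the avalanche analysis concrete, the sketch below detects avalanches in a binned population spike count, following the common definition of an avalanche as a run of consecutive non-empty time bins. It is not the authors' network model; the Poisson spike counts are a hypothetical placeholder for simulated population activity.

```python
# Minimal sketch of avalanche detection from binned population spike counts.
# The Poisson counts are a placeholder for activity of a simulated network.
import numpy as np

rng = np.random.default_rng(1)
counts = rng.poisson(lam=0.4, size=20_000)   # spikes per time bin (placeholder)

# An avalanche is a run of consecutive non-empty bins bounded by empty bins;
# its size is the total number of spikes within the run.
sizes, current = [], 0
for c in counts:
    if c > 0:
        current += c
    elif current > 0:
        sizes.append(current)
        current = 0
if current > 0:
    sizes.append(current)

sizes = np.asarray(sizes)
print(f"{sizes.size} avalanches, mean size {sizes.mean():.2f}, max size {sizes.max()}")

# Near criticality the size distribution is expected to approach a power law;
# here we only report an empirical histogram on logarithmically spaced bins.
bins = np.logspace(0, np.log10(sizes.max() + 1), 15)
hist, _ = np.histogram(sizes, bins=bins, density=True)
print(np.round(hist, 4))
```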
Background: Spike trains of multiple neurons can be analyzed following either the summed population (SP) or the labeled line (LL) hypothesis: responses to external stimuli are generated either by the neuronal population as a whole or by individual neurons with encoding capacities of their own. The SPIKE-distance, estimated either for a single spike train pooled over the population or for each neuron separately, can serve to quantify these responses. New Method: For the SP case we compare three algorithms that search for the most discriminative subpopulation over all stimulus pairs. For the LL case we introduce a new algorithm that combines the neurons that individually best separate different pairs of stimuli. Results: The best approach for SP is a brute-force search over all possible subpopulations; however, it is only feasible for small populations. For more realistic settings, simulated annealing clearly outperforms gradient algorithms with only a limited increase in computational load (a sketch of such a search follows this abstract). Our novel LL approach can handle very involved coding scenarios despite its computational ease. Comparison with Existing Methods: Spike train distances have been extended to the analysis of neural populations by interpolating between SP and LL coding, including a parametrization of how important it is to distinguish spikes fired in different neurons. Yet these approaches only consider the population as a whole; the explicit focus on subpopulations renders our algorithms complementary. Conclusions: The spectrum of encoding possibilities in neural populations is broad. The SP and LL cases are two extremes for which our algorithms provide correct identification results.
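The simulated-annealing search over subpopulations can be sketched as below. The objective function here is a deliberately simple placeholder; in the method described above it would be a SPIKE-distance-based discrimination score across stimulus pairs (for example computed with the PySpike package), which this toy score does not implement.

```python
# Minimal sketch of a simulated-annealing search for a discriminative
# subpopulation.  score() is a toy placeholder, NOT the SPIKE-distance-based
# discrimination measure used in the paper.
import numpy as np

rng = np.random.default_rng(2)
n_neurons = 40
weights = rng.normal(size=n_neurons)      # toy per-neuron "informativeness"

def score(mask):
    sel = weights[mask]
    return sel.mean() if sel.size else -np.inf

mask = rng.random(n_neurons) < 0.5        # random initial subpopulation
best_mask, best_score = mask.copy(), score(mask)
temperature = 1.0

for _ in range(5000):
    proposal = mask.copy()
    proposal[rng.integers(n_neurons)] ^= True        # flip one neuron in/out
    delta = score(proposal) - score(mask)
    if delta > 0 or rng.random() < np.exp(delta / temperature):
        mask = proposal                               # accept the move
        if score(mask) > best_score:
            best_mask, best_score = mask.copy(), score(mask)
    temperature *= 0.999                              # geometric cooling

print(f"best score {best_score:.3f} with {best_mask.sum()} neurons selected")
```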
A good understanding of how neurons use electrical pulses (i.e., spikes) to encode signal information remains elusive. Analyzing spike sequences generated by individual neurons and by two coupled neurons (using the stochastic FitzHugh-Nagumo model), recent theoretical studies have found that the relative timing of the spikes can encode the signal information. Using a symbolic method to analyze the spike sequence, preferred and infrequent spike patterns were detected, whose probabilities vary with both the amplitude and the frequency of the signal. To investigate whether this encoding mechanism is also plausible for neuronal ensembles, here we analyze the activity of a group of neurons that all perceive a weak periodic signal. We find that, as in the case of one or two coupled neurons, the probabilities of the spike patterns, now computed from the spike sequences of all the neurons, depend on the signal's amplitude and period, and thus the pattern probabilities encode the information of the signal. We also find that the resonances with the period of the signal or with the noise level are more pronounced when a group of neurons perceives the signal than when only one or two coupled neurons perceive it. Neuronal coupling is beneficial for signal encoding, as a group of neurons is able to encode a small-amplitude signal that cannot be encoded when it is perceived by just one or two coupled neurons. Interestingly, we find that for a group of neurons, just a few connections with one another can significantly improve the encoding of small-amplitude signals. Our findings indicate that information encoding in preferred and infrequent spike patterns is a plausible mechanism that neuronal populations can employ to encode weak periodic inputs, exploiting the presence of neural noise.
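The symbolic analysis of spike sequences can be illustrated with ordinal patterns of consecutive inter-spike intervals, as sketched below. This is a generic sketch under the assumption that the symbolic method is an ordinal-pattern encoding; the exponential ISIs are a placeholder for the output of the coupled-neuron model.

```python
# Minimal sketch of a symbolic (ordinal-pattern) analysis of a spike train:
# words of D consecutive inter-spike intervals are mapped to order patterns
# and their probabilities are estimated.  Synthetic ISIs are used as input.
import numpy as np
from collections import Counter
from itertools import permutations

rng = np.random.default_rng(3)
isis = rng.exponential(scale=0.02, size=10_000)    # placeholder ISI sequence

D = 3                                              # word length (D ISIs)
pattern_counts = Counter()
for i in range(len(isis) - D + 1):
    word = isis[i:i + D]
    pattern = tuple(np.argsort(word).tolist())     # ordinal pattern of the word
    pattern_counts[pattern] += 1

total = sum(pattern_counts.values())
for perm in permutations(range(D)):                # all D! pattern probabilities
    print(perm, f"{pattern_counts.get(perm, 0) / total:.3f}")
```

Preferred and infrequent patterns correspond to probabilities well above or below the uniform value 1/D!, and tracking how these probabilities shift with the stimulus is what carries the signal information.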
Laurent Perrinet (2009)
While modern computers are sometimes superior to humans in specialized tasks such as playing chess or browsing a large database, they cannot beat the efficiency of biological vision for such simple tasks as recognizing and following an object in a complex, cluttered background. In this paper we present our attempt at outlining the dynamical, parallel and event-based representation for vision in the architecture of the central nervous system. We illustrate this on static natural images by showing that, in a signal-matching framework, an L/NL (linear/non-linear) cascade may efficiently transform a sensory signal into a neural spiking signal, and we apply this framework to a model retina. However, this code becomes redundant when using an over-complete basis, as is necessary for modeling the primary visual cortex; we therefore optimize the efficiency cost by increasing the sparseness of the code. This is implemented by propagating and canceling redundant information using lateral interactions. We evaluate the efficiency of this representation in terms of compression, measured as the reconstruction quality as a function of the coding length. This corresponds to a modification of the Matching Pursuit algorithm in which the ArgMax function is optimized for competition, or Competition Optimized Matching Pursuit (COMP). We focus in particular on bridging neuroscience and image processing and on the advantages of such an interdisciplinary approach.
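The greedy core of Matching Pursuit on an over-complete dictionary can be sketched as follows. This is plain Matching Pursuit on a random dictionary, not the COMP variant: the competition-optimized ArgMax and the lateral interactions described above are not implemented here.

```python
# Minimal sketch of plain Matching Pursuit with a random over-complete
# dictionary.  The COMP variant modifies the ArgMax step; it is not shown.
import numpy as np

rng = np.random.default_rng(4)
n_features, n_atoms = 64, 256

D = rng.normal(size=(n_features, n_atoms))
D /= np.linalg.norm(D, axis=0)            # unit-norm dictionary atoms (columns)

signal = rng.normal(size=n_features)
residual = signal.copy()
coeffs = np.zeros(n_atoms)

for _ in range(20):                            # fixed coding length: 20 atoms
    correlations = D.T @ residual
    k = int(np.argmax(np.abs(correlations)))   # ArgMax: best-matching atom
    coeffs[k] += correlations[k]
    residual -= correlations[k] * D[:, k]      # cancel the explained component

reconstruction = D @ coeffs
snr = 10 * np.log10(np.sum(signal ** 2) / np.sum((signal - reconstruction) ** 2))
print(f"sparse code uses {np.count_nonzero(coeffs)} atoms, SNR {snr:.1f} dB")
```

The reconstruction-quality-versus-coding-length trade-off mentioned in the abstract corresponds to varying the number of iterations of this greedy loop.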
Developing electrophysiological recordings of brain neuronal activity and their analysis provide a basis for exploring the structure of brain function and for investigating the nervous system. The recorded signals are typically a combination of spikes and noise. High amounts of background noise and the possibility of recording electrical signals from several neurons adjacent to the recording site have led scientists to develop neuronal signal-processing tools such as spike sorting to facilitate the analysis of brain data. Spike sorting plays a pivotal role in understanding the electrophysiological activity of neuronal networks. This process prepares recorded data for the interpretation of neuronal interactions and an understanding of the overall structure of brain functions. Spike sorting consists of three steps: spike detection, feature extraction, and spike clustering, and there are several methods to implement each of these steps. This paper provides a systematic comparison of various spike sorting sub-techniques applied to real extracellular recordings from the rat basolateral amygdala. Efficiently sorted data, resulting from a careful choice of spike sorting sub-methods, lead to a better interpretation of the connectivity of brain structures under different conditions, which is a very sensitive issue in the diagnosis and treatment of neurological disorders. Here, spike detection is performed by an appropriate choice of the threshold level via three different approaches. Feature extraction is done through PCA and kernel PCA, with kernel PCA performing better. We have applied four different algorithms for spike clustering: K-means, fuzzy C-means, Bayesian, and fuzzy maximum likelihood estimation. As required by most clustering algorithms, the optimal number of clusters is determined through validity indices for each method. Finally, the sorting results are evaluated using inter-spike interval histograms.
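A spike-sorting pipeline of the kind compared above can be sketched in a few lines: threshold detection, waveform extraction, PCA features, and K-means clustering. The synthetic trace, sampling rate, threshold factor and number of clusters below are illustrative assumptions; the kernel PCA, fuzzy and Bayesian variants and the validity indices discussed in the paper are omitted.

```python
# Minimal sketch of a spike-sorting pipeline on a synthetic trace:
# detection -> waveform extraction -> PCA features -> K-means clustering.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
fs = 30_000                                   # assumed sampling rate (Hz)
trace = rng.normal(scale=5.0, size=2 * fs)    # 2 s of placeholder noise
for t in rng.choice(len(trace) - 64, size=200, replace=False):
    trace[t:t + 32] += -40 * np.hanning(32)   # inject simple negative spikes

# 1) Detection: threshold at 4x the robust noise estimate (median / 0.6745).
sigma = np.median(np.abs(trace)) / 0.6745
crossings = np.flatnonzero(trace < -4 * sigma)
peaks = crossings[np.insert(np.diff(crossings) > 32, 0, True)]   # run starts

# 2) Feature extraction: PCA on aligned waveform snippets (32 samples each).
waveforms = np.array([trace[p - 10:p + 22] for p in peaks
                      if 10 <= p < len(trace) - 22])
features = PCA(n_components=3).fit_transform(waveforms)

# 3) Clustering: K-means with an assumed number of units.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(f"{len(waveforms)} detected spikes, cluster sizes: {np.bincount(labels)}")
```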