
A simple model for low variability in neural spike trains

Added by Ulisse Ferrari
Publication date: 2018
Fields: Biology, Physics
Language: English





Neural noise sets a limit to information transmission in sensory systems. In several areas, the spiking response to a repeated stimulus has shown a higher degree of regularity than predicted by a Poisson process. However, a simple model to explain this low variability is still lacking. Here we introduce a new model, with a correction to Poisson statistics, that can accurately predict the regularity of neural spike trains in response to a repeated stimulus. The model has only two parameters but can reproduce the observed variability in retinal recordings under various conditions. We show analytically why this approximation can work. Assuming a refractory period in the spike-emitting process, we derive that our simple correction approximates the spike-train statistics well over a broad range of firing rates. Our model can easily be plugged into stimulus-processing models, such as the linear-nonlinear model or its generalizations, to replace the commonly assumed Poisson spike-train hypothesis. In retinal recordings, it estimates the amount of information transmitted much more accurately than Poisson models. Thanks to its simplicity, this model has the potential to explain low variability in other areas.
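
The two-parameter correction itself is given in the paper rather than in this abstract; the Python sketch below only illustrates the underlying intuition that a refractory period makes spike counts sub-Poisson (Fano factor below 1). All names and values (rate_hz, refractory_s, window_s) are hypothetical.

```python
# Illustrative sketch (not the authors' model): spike counts from a
# Poisson-like process with an absolute refractory period have a Fano
# factor (variance/mean) below 1, i.e. lower variability than Poisson.
import numpy as np

rng = np.random.default_rng(0)

def count_spikes(rate_hz, refractory_s, window_s, n_trials):
    """Count spikes per trial for a renewal process with exponential
    inter-spike intervals plus an absolute refractory period."""
    counts = np.empty(n_trials, dtype=int)
    for i in range(n_trials):
        t, n = 0.0, 0
        while True:
            t += refractory_s + rng.exponential(1.0 / rate_hz)
            if t > window_s:
                break
            n += 1
        counts[i] = n
    return counts

for refractory_ms in (0.0, 2.0, 5.0):
    c = count_spikes(rate_hz=50.0, refractory_s=refractory_ms * 1e-3,
                     window_s=0.2, n_trials=5000)
    fano = c.var() / c.mean()
    print(f"refractory = {refractory_ms:>4.1f} ms  "
          f"mean count = {c.mean():.2f}  Fano factor = {fano:.2f}")
```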



Related research

Neurons perform computations, and convey the results of those computations through the statistical structure of their output spike trains. Here we present a practical method, grounded in the information-theoretic analysis of prediction, for inferring a minimal representation of that structure and for characterizing its complexity. Starting from spike trains, our approach finds their causal state models (CSMs), the minimal hidden Markov models or stochastic automata capable of generating statistically identical time series. We then use these CSMs to objectively quantify both the generalizable structure and the idiosyncratic randomness of the spike train. Specifically, we show that the expected algorithmic information content (the information needed to describe the spike train exactly) can be split into three parts describing (1) the time-invariant structure (complexity) of the minimal spike-generating process, which describes the spike train statistically; (2) the randomness (internal entropy rate) of the minimal spike-generating process; and (3) a residual pure noise term not described by the minimal spike-generating process. We use CSMs to approximate each of these quantities. The CSMs are inferred nonparametrically from the data, making only mild regularity assumptions, via the causal state splitting reconstruction algorithm. The methods presented here complement more traditional spike train analyses by describing not only spiking probability and spike train entropy, but also the complexity of a spike train's structure. We demonstrate our approach using both simulated spike trains and experimental data recorded in rat barrel cortex during vibrissa stimulation.
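
The paper infers causal state models with the causal state splitting reconstruction algorithm; the sketch below is not that algorithm, only a simplified Python illustration of the idea behind causal states: histories of a binarized spike train are grouped together when they predict statistically similar futures. The toy two-state spike generator, the history length k, and the tolerance tol are assumptions.

```python
# Simplified sketch of the causal-state idea (not CSSR): estimate
# P(next symbol | last k symbols) from a binary spike train and merge
# histories whose predictive distributions agree within a tolerance.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(1)
# toy spike train driven by a hidden two-state (bursting vs. quiet) process
spikes, state = [], 0
for _ in range(20000):
    if rng.random() < 0.02:
        state = 1 - state                    # occasional state switch
    p_spike = 0.4 if state == 1 else 0.05    # bursting vs. quiet firing
    spikes.append(int(rng.random() < p_spike))

k = 3                                        # history length (assumption)
counts = defaultdict(lambda: np.zeros(2))
for t in range(k, len(spikes)):
    counts[tuple(spikes[t - k:t])][spikes[t]] += 1

tol = 0.05                                   # merging tolerance (assumption)
groups = []                                  # each entry: [p_spike, histories]
for history, c in sorted(counts.items()):
    p = c[1] / c.sum()
    for group in groups:
        if abs(group[0] - p) < tol:
            group[1].append(history)
            break
    else:
        groups.append([p, [history]])

for p, members in groups:
    print(f"P(spike) ~ {p:.2f}: {len(members)} histories merged")
```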
Recently, Segev et al. (Phys. Rev. E 64, 2001; Phys. Rev. Lett. 88, 2002) made long-term observations of the spontaneous activity of in-vitro cortical networks, which differs from the predictions of current models in many respects. In this paper we generalize the EI cortical model introduced in a previous paper (S. Scarpetta et al., Neural Comput. 14, 2002), including intrinsic white noise and analyzing the effects of noise on the spontaneous activity of the nonlinear system, in order to account for the experimental results of Segev et al. Analytically, we can distinguish different regimes of activity, depending on the model parameters. Using the analytical results as a guideline, we perform simulations of the nonlinear stochastic model in two different regimes, B and C. The power spectral density (PSD) of the activity and the inter-event-interval (IEI) distributions are computed and compared with the experimental results. In regime B the network shows stochastic resonance phenomena, and noise induces aperiodic collective synchronous oscillations that mimic the experimental observations at 0.5 mM Ca concentration. In regime C the model shows spontaneous synchronous periodic activity that mimics the activity observed at 1 mM Ca concentration, and the PSD shows two peaks at the first and second harmonics, in agreement with experiments at 1 mM Ca. Moreover, due to intrinsic noise and the nonlinear activation function, the PSD shows a broad-band peak at low frequency. This feature, observed experimentally, is not explained by previous models. In addition, we identify parameter changes (namely, an increase of noise or a decrease of excitatory connections) that reproduce the fading of periodicity found experimentally at long times, and we identify a way to discriminate between these two possible effects by measuring the low-frequency PSD experimentally.
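
As a rough Python illustration of the kind of spectral analysis described (not the EI model itself), the sketch below estimates the power spectral density of a noise-driven damped oscillator with SciPy's Welch method; a peak appears at the oscillation frequency on top of a noise-induced background. The parameters f0, damping, and noise_std are hypothetical.

```python
# Sketch: PSD of noisy oscillatory activity via Welch's method.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(2)
dt, T = 1e-3, 200.0                  # time step (s), total duration (s)
n = int(T / dt)
f0, damping, noise_std = 2.0, 1.0, 0.5

x, v = 0.0, 0.0
activity = np.empty(n)
omega2 = (2 * np.pi * f0) ** 2
for i in range(n):
    # Euler-Maruyama step of a noise-driven damped harmonic oscillator
    a = -omega2 * x - damping * v
    v += a * dt + noise_std * np.sqrt(dt) * rng.normal()
    x += v * dt
    activity[i] = x

freqs, psd = welch(activity, fs=1.0 / dt, nperseg=4096)
peak = freqs[np.argmax(psd)]
print(f"PSD peak near {peak:.2f} Hz (oscillator tuned to {f0:.2f} Hz)")
```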
We show that in model neuronal cultures, where the probability of interneuronal connection formation decreases exponentially with increasing distance between the neurons, there exists a small number of spatial nucleation centers of a network spike, from which the synchronous spiking activity starts propagating through the network, typically in the form of circular traveling waves. The number of nucleation centers and their spatial locations are unique and unchanged for a given realization of the neuronal network but differ between networks. In contrast, if the probability of interneuronal connection formation is independent of the distance between neurons, then nucleation centers do not arise and the synchronization of spiking activity during a network spike occurs spatially uniformly throughout the network. Therefore, one can conclude that spatial proximity of connections between neurons is important for the formation of nucleation centers. It is also shown that fluctuations of the spatial density of neurons under their random homogeneous distribution, typical of in vitro experiments, do not determine the locations of the nucleation centers. The simulation results are qualitatively consistent with the experimental observations.
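
The sketch below is not the paper's culture model; it only illustrates the stated connectivity rule in Python: neurons at random positions are connected with probability exp(-d / lambda_), so connections are biased toward spatially close pairs. The decay length lambda_, the unit-square geometry, and the directed connections are assumptions.

```python
# Sketch: spatial network with exponentially distance-dependent connectivity.
import numpy as np

rng = np.random.default_rng(3)
n_neurons, lambda_ = 400, 0.1          # decay length in units of the square side
pos = rng.random((n_neurons, 2))       # uniform positions in the unit square

# pairwise distances and distance-dependent Bernoulli connections (directed)
d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
p_connect = np.exp(-d / lambda_)
adj = rng.random((n_neurons, n_neurons)) < p_connect
np.fill_diagonal(adj, False)           # no self-connections

connected = d[adj]
off_diag = d[~np.eye(n_neurons, dtype=bool)]
print(f"mean distance of connected pairs: {connected.mean():.3f}")
print(f"mean distance of all pairs:       {off_diag.mean():.3f}")
```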
Fluctuation scaling has been observed universally in a wide variety of phenomena. In time series that describe sequences of events, fluctuation scaling is expressed as power-function relationships between the mean and variance of either inter-event intervals or counting statistics, depending on the measurement variables. In this article, we formulate fluctuation scaling for a series of events in a way that relates the scaling laws of the inter-event intervals and of the counting statistics. We consider the first-passage time of an Ornstein-Uhlenbeck process and use a conductance-based neuron model with excitatory and inhibitory synaptic inputs to demonstrate the emergence of fluctuation scaling with various exponents, depending on the input regimes and the ratio between excitation and inhibition. Furthermore, we discuss the possible implications of these results in the context of neural coding.
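
As a hedged Python illustration of the procedure described (not the conductance-based model of the paper), the sketch below measures first-passage times of an Ornstein-Uhlenbeck process to a threshold for several drive levels and fits a power-law relation between their mean and variance. The drive values mu, the time constant tau, the noise amplitude sigma, and the threshold are hypothetical.

```python
# Sketch: mean-variance (fluctuation) scaling of Ornstein-Uhlenbeck
# first-passage times across sub- to supra-threshold drives.
import numpy as np

rng = np.random.default_rng(4)
dt, tau, sigma, threshold = 1e-4, 0.02, 1.0, 1.0   # step (s), time const (s), noise, threshold

def first_passage_time(mu):
    """Time for dV = (mu - V)/tau dt + sigma dW to first reach threshold."""
    v, t = 0.0, 0.0
    while v < threshold:
        v += (mu - v) / tau * dt + sigma * np.sqrt(dt) * rng.normal()
        t += dt
    return t

means, variances = [], []
for mu in (0.8, 1.0, 1.2, 1.4):                    # sub- to supra-threshold drive
    fpt = np.array([first_passage_time(mu) for _ in range(300)])
    means.append(fpt.mean())
    variances.append(fpt.var())

# exponent of the mean-variance relation, variance ~ mean**alpha
alpha = np.polyfit(np.log(means), np.log(variances), 1)[0]
print("mean first-passage times (s):", np.round(means, 4))
print(f"fitted fluctuation-scaling exponent alpha ~ {alpha:.2f}")
```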
We show that a model of the hippocampus introduced recently by Scarpetta, Zhaoping & Hertz (Neural Computation 14(10):2371-96, 2002) explains the theta phase precession phenomenon. In our model, theta phase precession comes out as a consequence of the associative-memory-like network dynamics, i.e., the network's ability to imprint and recall oscillatory patterns coded by both the phases and amplitudes of oscillation. The learning rule used to imprint the oscillatory states is a natural generalization of the one used for static patterns in the Hopfield model, and is based on the experimentally observed spike-timing-dependent plasticity (STDP). In agreement with experimental findings, the place cells' activity appears at consistently earlier phases of subsequent cycles of the ongoing theta rhythm during a pass through the place field, while the oscillation amplitude of the place cells' firing rate increases as the animal approaches the center of the place field and decreases as the animal leaves it. The total phase precession of the place cell is less than 360 degrees, in agreement with experiments. As the animal enters a receptive field, the place cell's activity comes slightly less than 180 degrees after the phase of maximal pyramidal-cell population activity, in agreement with the findings of Skaggs et al. (1996). Our model predicts that the theta phase is much better correlated with location than with the time spent in the receptive field. Finally, in agreement with the recent experimental findings of Zugaro et al. (2005), our model predicts that theta phase precession persists after transient intra-hippocampal perturbation.
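
The sketch below is not the learning rule of Scarpetta, Zhaoping & Hertz; it only illustrates, in Python, how an asymmetric STDP window makes the net weight change between two units firing once per theta cycle depend on their phase lag, a phase dependence of the kind such an STDP-based rule would exploit. The window amplitudes, time constants, and theta frequency are hypothetical.

```python
# Sketch: net STDP-driven weight change vs. phase lag for theta-periodic firing.
import numpy as np

A_plus, A_minus = 1.0, 1.0
tau_plus, tau_minus = 0.017, 0.034     # s, rough orders of magnitude

def stdp_window(delta_t):
    """Weight change for a post-minus-pre spike time difference delta_t (s)."""
    return np.where(delta_t >= 0,
                    A_plus * np.exp(-delta_t / tau_plus),
                    -A_minus * np.exp(delta_t / tau_minus))

f_theta = 8.0                          # theta frequency (Hz)
period = 1.0 / f_theta
lags = np.linspace(-period / 2, period / 2, 9)

# net weight drift when pre fires once per cycle and post fires `lag` later,
# summed over nearby cycles
cycles = np.arange(-5, 6)
for lag in lags:
    dw = stdp_window(lag + cycles * period).sum()
    print(f"phase lag {lag * f_theta * 360:+7.1f} deg  ->  net dW {dw:+.3f}")
```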
