
Fluctuation scaling in neural spike trains

Added by Shinsuke Koyama
Publication date: 2014
Fields: Physics
Language: English





Fluctuation scaling has been observed universally in a wide variety of phenomena. In time series that describe sequences of events, fluctuation scaling is expressed as a power-function relationship between the mean and variance of either inter-event intervals or counting statistics, depending on the measured variables. In this article, we formulate fluctuation scaling for series of events in a way that relates the scaling laws of inter-event intervals and counting statistics. We consider the first-passage time of an Ornstein-Uhlenbeck process and use a conductance-based neuron model with excitatory and inhibitory synaptic inputs to demonstrate the emergence of fluctuation scaling with various exponents, depending on the input regime and the ratio between excitation and inhibition. Finally, we discuss the possible implications of these results in the context of neural coding.
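As a rough illustration of the first-passage-time setup described above, the sketch below simulates an Ornstein-Uhlenbeck process with reset and fits a power-law relation between the mean and variance of the resulting inter-event intervals. All parameter values (mu, sigma, tau, threshold) are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ou_first_passage_times(mu, sigma, tau, threshold, n_spikes, dt=1e-3):
    """First-passage times of dV = -(V - mu)/tau dt + sigma dW, with V
    reset to 0 after each threshold crossing (Euler-Maruyama scheme)."""
    intervals = []
    v, t = 0.0, 0.0
    while len(intervals) < n_spikes:
        v += (-(v - mu) / tau) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if v >= threshold:
            intervals.append(t)   # inter-event interval since last reset
            v, t = 0.0, 0.0
    return np.array(intervals)

# Mean-variance relation of inter-event intervals across different drives:
means, variances = [], []
for mu in [0.8, 1.0, 1.2, 1.5]:
    isi = ou_first_passage_times(mu, sigma=0.5, tau=1.0,
                                 threshold=1.0, n_spikes=100)
    means.append(isi.mean())
    variances.append(isi.var())

# Fluctuation scaling: fit Var ~ c * Mean**alpha on log-log axes
alpha, log_c = np.polyfit(np.log(means), np.log(variances), 1)
```

Varying the drive mu moves the process between sub- and suprathreshold regimes, which is the kind of regime dependence of the exponent that the abstract refers to.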

Related research


Shinsuke Koyama (2013)
The fluctuation scaling law has been observed universally in a wide variety of phenomena. For counting processes that describe the number of events occurring during time intervals, it is expressed as a power-function relationship between the variance and the mean of the event count per unit time, whose characteristic exponent is obtained theoretically in the limit of long counting windows. Here I show that the scaling law effectively appears even on a short timescale in which only a few events occur. Consequently, the counting statistics of nonstationary event sequences are shown to exhibit the scaling law, as well as the dynamics, at the temporal resolution of this timescale. I also propose a method to extract the characteristic scaling exponent from nonstationary data in a systematic manner.
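The counting-statistics side of the scaling law can be illustrated with a minimal sketch: bin a spike train at several window sizes and fit Var[N] ~ Mean[N]**alpha on log-log axes. This is only the naive long-window estimate, not the systematic extraction method the abstract proposes; it is applied here to a homogeneous Poisson process, for which alpha should come out close to 1.

```python
import numpy as np

rng = np.random.default_rng(1)

def count_scaling_exponent(spike_times, windows):
    """Fit Var[N] ~ Mean[N]**alpha across counting-window sizes."""
    means, variances = [], []
    T = spike_times.max()
    for w in windows:
        edges = np.arange(0.0, T, w)          # non-overlapping windows of width w
        counts, _ = np.histogram(spike_times, bins=edges)
        means.append(counts.mean())
        variances.append(counts.var())
    alpha, _ = np.polyfit(np.log(means), np.log(variances), 1)
    return alpha

# Homogeneous Poisson process (rate 10): Var[N] = Mean[N], so alpha ~ 1
spikes = np.cumsum(rng.exponential(scale=0.1, size=20000))
alpha = count_scaling_exponent(spikes, windows=[0.5, 1.0, 2.0, 4.0, 8.0])
```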
Neural noise sets a limit to information transmission in sensory systems. In several areas, the spiking response (to a repeated stimulus) has shown a higher degree of regularity than predicted by a Poisson process. However, a simple model to explain this low variability is still lacking. Here we introduce a new model, with a correction to Poisson statistics, which can accurately predict the regularity of neural spike trains in response to a repeated stimulus. The model has only two parameters, but can reproduce the observed variability in retinal recordings in various conditions. We show analytically why this approximation can work. In a model of the spike-emitting process where a refractory period is assumed, we derive that our simple correction approximates the spike train statistics well over a broad range of firing rates. Our model can easily be plugged into stimulus-processing models, such as the linear-nonlinear model or its generalizations, to replace the commonly assumed Poisson spike train hypothesis. It estimates the amount of information transmitted much more accurately than Poisson models in retinal recordings. Thanks to its simplicity, this model has the potential to explain low variability in other areas.
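A minimal way to see why a refractory period produces sub-Poisson variability is to simulate a renewal process with an absolute dead time and measure the Fano factor of the spike counts. This generic sketch is not the paper's two-parameter model; the rate, refractory period, and window size below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def spikes_with_refractory(rate, refractory, duration):
    """Renewal process: exponential intervals plus an absolute refractory
    period (dead time) after each spike."""
    t, times = 0.0, []
    while t < duration:
        t += refractory + rng.exponential(1.0 / rate)
        times.append(t)
    return np.array(times)

def fano_factor(spike_times, window, duration):
    """Variance-to-mean ratio of spike counts in fixed windows."""
    edges = np.arange(0.0, duration, window)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts.var() / counts.mean()

# A refractory period lowers the Fano factor below the Poisson value of 1
spikes = spikes_with_refractory(rate=50.0, refractory=0.005, duration=200.0)
ff = fano_factor(spikes, window=0.1, duration=200.0)
```

With these numbers the interval CV squared is well below 1, so the count statistics are more regular than a Poisson process of the same rate, which is the qualitative effect the abstract's correction captures.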
We consider the evolution of a network of neurons, focusing on the asymptotic behavior of spike dynamics instead of membrane-potential dynamics. The spike response is not sought as a deterministic response in this context, but as a conditional probability: reading out the code consists of inferring such a probability. This probability is computed from empirical raster plots using the framework of thermodynamic formalism in ergodic theory. This gives us a parametric statistical model in which the probability takes the form of a Gibbs distribution. In this respect, the approach generalizes the seminal and profound work of Schneidman and collaborators. A minimal presentation of the formalism is reviewed here, and a general algorithmic estimation method is proposed, yielding fast convergent implementations. It is also made explicit how several spike observables (entropy, rate, synchronizations, correlations) are given in closed form from the parametric estimation. This paradigm allows us not only to estimate the spike statistics, given a design choice, but also to compare different models, thus answering comparative questions about the neural code, such as: are correlations (or time synchrony, or a given set of spike patterns) significant with respect to rate coding only? A numerical validation of the method is proposed, and perspectives regarding spike-train code analysis are discussed.
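A toy version of the comparative question above ("are correlations significant with respect to rate coding only?") can be sketched by comparing the empirical distribution of spike words against an independent, rate-only model via a KL divergence. This is not the paper's Gibbs-distribution estimator, only the simplest instance of the model-comparison idea; the raster here is random and independent by construction, so the divergence should be near zero.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(3)

# Binary raster: 3 neurons x 5000 time bins, independent by construction
raster = (rng.random((3, 5000)) < 0.2).astype(int)

# Empirical distribution of spike words (columns of the raster)
words = [tuple(col) for col in raster.T]
p_emp = {w: c / len(words) for w, c in Counter(words).items()}

# Independent (rate-only) model: product of per-neuron firing probabilities
rates = raster.mean(axis=1)
def p_ind(word):
    return np.prod([r if s else 1 - r for s, r in zip(word, rates)])

# KL divergence D(p_emp || p_ind): how much do correlations matter?
kl = sum(p * np.log(p / p_ind(w)) for w, p in p_emp.items())
```

For real rasters with synchrony, kl grows with the strength of the correlations, and richer parametric models (such as the Gibbs distributions of the abstract) can be compared against the rate-only baseline in the same way.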
Neurons perform computations, and convey the results of those computations through the statistical structure of their output spike trains. Here we present a practical method, grounded in the information-theoretic analysis of prediction, for inferring a minimal representation of that structure and for characterizing its complexity. Starting from spike trains, our approach finds their causal state models (CSMs), the minimal hidden Markov models or stochastic automata capable of generating statistically identical time series. We then use these CSMs to objectively quantify both the generalizable structure and the idiosyncratic randomness of the spike train. Specifically, we show that the expected algorithmic information content (the information needed to describe the spike train exactly) can be split into three parts describing (1) the time-invariant structure (complexity) of the minimal spike-generating process, which describes the spike train statistically; (2) the randomness (internal entropy rate) of the minimal spike-generating process; and (3) a residual pure noise term not described by the minimal spike-generating process. We use CSMs to approximate each of these quantities. The CSMs are inferred nonparametrically from the data, making only mild regularity assumptions, via the causal state splitting reconstruction algorithm. The methods presented here complement more traditional spike train analyses by describing not only spiking probability and spike train entropy, but also the complexity of a spike train's structure. We demonstrate our approach using both simulated spike trains and experimental data recorded in rat barrel cortex during vibrissa stimulation.
Zhi Chen (2001)
Detrended fluctuation analysis (DFA) is a scaling analysis method used to quantify long-range power-law correlations in signals. Many physical and biological signals are "noisy" and heterogeneous, and exhibit different types of nonstationarities, which can affect the correlation properties of these signals. We systematically study the effects of three types of nonstationarities often encountered in real data. Specifically, we consider nonstationary sequences formed in three ways: (i) stitching together segments of data obtained from discontinuous experimental recordings, or removing some noisy and unreliable parts from continuous recordings and stitching together the remaining parts (a "cutting" procedure commonly used in preparing data prior to signal analysis); (ii) adding to a signal with known correlations a tunable concentration of random outliers or spikes with different amplitudes; and (iii) generating a signal composed of segments with different properties, e.g. different standard deviations or different correlation exponents. We compare the scaling results obtained for stationary correlated signals with those obtained for correlated signals with these three types of nonstationarities.
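For readers unfamiliar with the method, here is a minimal order-1 DFA sketch: integrate the mean-subtracted signal, detrend it linearly in non-overlapping windows, and fit the rms fluctuation versus window size on log-log axes. For white noise the exponent should come out near 0.5; the scale choices below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)

def dfa(signal, scales):
    """Order-1 detrended fluctuation analysis: returns the exponent
    alpha from the fit F(n) ~ n**alpha."""
    profile = np.cumsum(signal - signal.mean())   # integrated signal
    fluctuations = []
    for n in scales:
        n_segments = len(profile) // n
        f2 = []
        for i in range(n_segments):
            seg = profile[i * n:(i + 1) * n]
            x = np.arange(n)
            coeffs = np.polyfit(x, seg, 1)        # local linear trend
            f2.append(np.mean((seg - np.polyval(coeffs, x)) ** 2))
        fluctuations.append(np.sqrt(np.mean(f2))) # rms fluctuation F(n)
    alpha, _ = np.polyfit(np.log(scales), np.log(fluctuations), 1)
    return alpha

# White (uncorrelated) noise: alpha ~ 0.5
alpha = dfa(rng.standard_normal(10000), scales=[16, 32, 64, 128, 256])
```

The nonstationarities studied in the abstract (cutting, spikes, segment heterogeneity) distort precisely this log-log fit, producing crossovers or biased exponents.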
