
Entropy-based parametric estimation of spike train statistics

Added by Bruno Cessac
Publication date: 2010
Fields: Physics, Biology
Language: English





We consider the evolution of a network of neurons, focusing on the asymptotic behavior of spike dynamics rather than membrane potential dynamics. In this context, the spike response is not sought as a deterministic response but as a conditional probability: reading out the code consists of inferring such a probability. This probability is computed from empirical raster plots using the framework of thermodynamic formalism in ergodic theory. This yields a parametric statistical model in which the probability has the form of a Gibbs distribution. In this respect, the approach generalizes the seminal and profound work of Schneidman and collaborators. A minimal presentation of the formalism is reviewed here, and a general algorithmic estimation method is proposed that yields fast convergent implementations. It is also made explicit how several spike observables (entropy, rate, synchronizations, correlations) are obtained in closed form from the parametric estimation. This paradigm not only allows us to estimate the spike statistics given a design choice, but also to compare different models, thus answering comparative questions about the neural code such as: are correlations (or time synchrony, or a given set of spike patterns, ...) significant with respect to rate coding alone? A numerical validation of the method is proposed, and the perspectives regarding spike-train code analysis are also discussed.
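As an illustrative sketch only (not the authors' estimation algorithm), the kind of parametric model described above can be written down explicitly for a small network: a Gibbs distribution over binary spike patterns with assumed single-neuron fields `h` and pairwise couplings `J`, from which observables such as entropy follow in closed form.

```python
import itertools
import numpy as np

def gibbs_distribution(h, J):
    """Gibbs distribution over binary spike patterns w of N neurons:
    P(w) proportional to exp(sum_i h_i w_i + sum_{i<j} J_ij w_i w_j).
    J is assumed symmetric with zero diagonal."""
    N = len(h)
    patterns = np.array(list(itertools.product([0, 1], repeat=N)))
    # Energy of each pattern; the 0.5 factor avoids double-counting pairs.
    energies = patterns @ h + 0.5 * np.einsum('ki,ij,kj->k', patterns, J, patterns)
    weights = np.exp(energies)
    return patterns, weights / weights.sum()  # normalized probabilities

def entropy(p):
    """Shannon entropy of a probability vector, in bits."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))
```

With `h = 0` and `J = 0` the distribution is uniform, so for two neurons the entropy is exactly 2 bits; fitting `h` and `J` to match empirical firing rates and correlations is what a maximum-entropy estimation procedure would then do.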



Related research


387 - I. Grabec 2007
Redundancy of experimental data is the basic statistic from which the complexity of a natural phenomenon and the proper number of experiments needed for its exploration can be estimated. The redundancy is expressed by the entropy of information pertaining to the probability density function of experimental variables. Since the calculation of entropy is inconvenient due to integration over a range of variables, an approximate expression for redundancy is derived that includes only a sum over the set of experimental data about these variables. The approximation makes feasible an efficient estimation of the redundancy of data along with the related experimental information and information cost function. From the experimental information the complexity of the phenomenon can be simply estimated, while the proper number of experiments needed for its exploration can be determined from the minimum of the cost function. The performance of the approximate estimation of these statistics is demonstrated on two-dimensional normally distributed random data.
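The key idea above, replacing an integral over variables with a sum over the data, can be sketched with a simple estimator that is in the same spirit (not Grabec's exact expression): evaluate a kernel density at the sample points and average the negative log-density, so entropy is computed from the data alone.

```python
import numpy as np

def entropy_from_samples(x, sigma):
    """Entropy estimate H ~ -(1/n) * sum_i log p_hat(x_i), where p_hat
    is a Gaussian kernel density evaluated at the sample points.
    `sigma` is the kernel bandwidth (an assumed tuning parameter)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Pairwise squared distances and Gaussian kernel values.
    d2 = (x[:, None] - x[None, :]) ** 2
    k = np.exp(-d2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))
    p_hat = k.mean(axis=1)  # density estimate at each sample
    return -np.mean(np.log(p_hat))  # entropy in nats
```

For standard normal data the result should lie near the true differential entropy 0.5·ln(2πe) ≈ 1.42 nats, up to bandwidth-dependent bias.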
Langevin models are frequently used to model various stochastic processes in different fields of natural and social sciences. They are adapted to measured data by estimation techniques such as maximum likelihood estimation, Markov chain Monte Carlo methods, or the non-parametric direct estimation method introduced by Friedrich et al. The latter has the distinction of being very effective in the context of large data sets. Due to their $\delta$-correlated noise, standard Langevin models are limited to Markovian dynamics. A non-Markovian Langevin model can be formulated by introducing a hidden component that realizes correlated noise. For the estimation of such a partially observed diffusion a different version of the direct estimation method was introduced by Lehle et al. However, this procedure includes the limitation that the correlation length of the noise component is small compared to that of the measured component. In this work we propose another version of the direct estimation method that does not include this restriction. Via this method it is possible to deal with large data sets of a wider range of examples in an effective way. We discuss the abilities of the proposed procedure using several synthetic examples.
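The classic direct estimation idea referenced above (Friedrich et al.) can be sketched, under the standard Markovian assumption, by estimating the drift and diffusion functions from conditional moments of the increments of a sampled path; bin count and minimum occupancy below are assumed tuning choices.

```python
import numpy as np

def estimate_drift_diffusion(x, dt, bins=20):
    """Non-parametric direct estimation of a 1D Langevin model:
    drift D1(x) ~ <dx | x> / dt, diffusion D2(x) ~ <dx^2 | x> / (2 dt),
    computed by binning the state values."""
    dx = np.diff(x)
    xs = x[:-1]
    edges = np.linspace(xs.min(), xs.max(), bins + 1)
    idx = np.clip(np.digitize(xs, edges) - 1, 0, bins - 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    D1 = np.full(bins, np.nan)
    D2 = np.full(bins, np.nan)
    for b in range(bins):
        m = idx == b
        if m.sum() > 10:  # skip under-populated bins
            D1[b] = dx[m].mean() / dt
            D2[b] = (dx[m] ** 2).mean() / (2.0 * dt)
    return centers, D1, D2
```

Applied to a simulated Ornstein-Uhlenbeck path dx = -θx dt + σ dW, the estimated drift should be approximately linear with slope -θ and the diffusion approximately constant at σ²/2.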
We extracted and processed abstract data from the SFN annual meeting abstracts during the period 2001-2006, using techniques and software from natural language processing, database management, and data visualization and analysis. An important first step in the process was the application of data cleaning and disambiguation methods to construct a unified database, since the data were too noisy to be of full utility in the raw form initially available. The resulting co-author graph in 2006, for example, had 39,645 nodes (with an estimated 6% error rate in our disambiguation of similar author names) and 13,979 abstracts, with an average of 1.5 abstracts per author, 4.3 authors per abstract, and 5.96 collaborators per author (including all authors on shared abstracts). Recent work in related areas has focused on reputational indices such as highly cited papers or scientists and journal impact factors, and to a lesser extent on creating visual maps of the knowledge space. In contrast, there has been relatively less work on demographics and community structure, on the dynamics of the field over time (to examine major research trends), and on the structure of the sources of research funding. In this paper we examined each of these areas in order to gain an objective overview of contemporary neuroscience. Some interesting findings include a high geographical concentration of neuroscience research in the northeastern United States, a surprisingly large transient population (60% of the authors appear in only one out of the six studied years), the central role played by the study of neurodegenerative disorders in the neuroscience community structure, and an apparent growth of behavioral/systems neuroscience with a corresponding shrinkage of cellular/molecular neuroscience over the six-year period.
Symbolic methods of analysis are valuable tools for investigating complex time-dependent signals. In particular, the ordinal method defines sequences of symbols according to the ordering in which values appear in a time series. This method has been shown to yield useful information, even when applied to signals with large noise contamination. Here we use ordinal analysis to investigate the transition between eyes closed (EC) and eyes open (EO) resting states. We analyze two EEG datasets (with 71 and 109 healthy subjects) with different recording conditions (sampling rates and the number of electrodes on the scalp). Using as diagnostic tools the permutation entropy, the entropy computed from symbolic transition probabilities, and an asymmetry coefficient (that measures the asymmetry of the likelihood of the transitions between symbols), we show that ordinal analysis applied to the raw data distinguishes the two brain states. In both datasets, we find that the EO state is characterized by higher entropies and a lower asymmetry coefficient, as compared to the EC state. Our results thus show that these diagnostic tools have the potential for detecting and characterizing changes in time-evolving brain states.
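The first diagnostic named above, permutation entropy, is standard enough to sketch: each window of the series is mapped to the ordinal pattern (the ranking of its values), and the entropy of the pattern distribution is computed; the pattern order is an assumed parameter.

```python
import itertools
import math
import numpy as np

def permutation_entropy(x, order=3):
    """Permutation entropy of a 1D series: entropy (in bits) of the
    distribution of ordinal patterns of length `order`, normalized
    by its maximum value log2(order!)."""
    x = np.asarray(x)
    n = len(x) - order + 1
    counts = {}
    for i in range(n):
        # The ordinal pattern is the argsort of the window.
        pattern = tuple(np.argsort(x[i:i + order]))
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float) / n
    H = -np.sum(p * np.log2(p))
    return H / math.log2(math.factorial(order))
```

A monotone series has a single ordinal pattern and entropy 0, while white noise visits all patterns nearly uniformly and gives a value close to 1.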
Fluctuation scaling has been observed universally in a wide variety of phenomena. In time series that describe sequences of events, fluctuation scaling is expressed as power function relationships between the mean and variance of either inter-event intervals or counting statistics, depending on measurement variables. In this article, fluctuation scaling has been formulated for a series of events in which scaling laws in the inter-event intervals and counting statistics were related. We have considered the first-passage time of an Ornstein-Uhlenbeck process and used a conductance-based neuron model with excitatory and inhibitory synaptic inputs to demonstrate the emergence of fluctuation scaling with various exponents, depending on the input regimes and the ratio between excitation and inhibition. Furthermore, we have discussed the possible implication of these results in the context of neural coding.
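The power-function relationship between mean and variance described above can be checked empirically with a short sketch (illustrative, not the article's model): fit Var = a · Mean^α by linear regression in log-log coordinates and read off the exponent α.

```python
import numpy as np

def fluctuation_scaling_exponent(means, variances):
    """Fit the power law Var = a * Mean**alpha by linear regression in
    log-log coordinates and return the scaling exponent alpha."""
    alpha, _ = np.polyfit(np.log(means), np.log(variances), 1)
    return alpha
```

For Poisson counting statistics the variance equals the mean, so the fitted exponent should come out close to α = 1; deviations from 1 are what signal non-trivial fluctuation scaling in regimes like those studied in the article.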