
Single-trial estimation of stimulus and spike-history effects on time-varying ensemble spiking activity of multiple neurons: a simulation study

Added by Hideaki Shimazaki
Publication date: 2013
Fields: Biology, Physics
Language: English





Neurons in cortical circuits exhibit coordinated spiking activity and can produce correlated synchronous spikes during behavior and cognition. We recently developed a method for estimating the dynamics of correlated ensemble activity by combining a model of simultaneous neuronal interactions (e.g., a spin-glass model) with a state-space method (Shimazaki et al. 2012 PLoS Comput Biol 8 e1002385). This method allows us to estimate stimulus-evoked dynamics of neuronal interactions that are reproducible across repeated trials under identical experimental conditions. However, the method may not be suitable for detecting stimulus responses if the neuronal dynamics exhibit significant variability across trials. In addition, the previous model does not include the effects of the neurons' past spiking activity on the current state of ensemble activity. In this study, we develop a parametric method for simultaneously estimating the stimulus and spike-history effects on ensemble activity from single-trial data, even if the neurons exhibit dynamics that are largely unrelated to these effects. To this end, we model ensemble neuronal activity as a latent process and include the stimulus and spike-history effects as exogenous inputs to the latent process. We develop an expectation-maximization algorithm that jointly estimates the latent process, the stimulus responses, and the spike-history effects. The proposed method is useful for analyzing the interaction between internal cortical states and sensory-evoked activity.
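As a rough sketch of the kind of model described above (generic notation and placeholder symbols, not the authors' exact formulation), the binary spike pattern of the ensemble in time bin t can be modeled by a log-linear (Ising-type) observation model whose natural parameters follow a latent process driven by exogenous stimulus and spike-history inputs:

\[
p(\mathbf{y}_t \mid \boldsymbol{\theta}_t)
  = \exp\!\Big( \sum_i \theta_{i,t}\, y_{i,t}
  + \sum_{i<j} \theta_{ij,t}\, y_{i,t} y_{j,t}
  - \psi(\boldsymbol{\theta}_t) \Big),
\]
\[
\boldsymbol{\theta}_t
  = F \boldsymbol{\theta}_{t-1}
  + G\, s_t
  + \sum_{k=1}^{K} H_k\, \mathbf{y}_{t-k}
  + \boldsymbol{w}_t,
  \qquad \boldsymbol{w}_t \sim \mathcal{N}(\mathbf{0}, Q),
\]

where \(\psi\) is the log-normalization term, \(s_t\) is a stimulus covariate, the matrices \(H_k\) carry the spike-history effects of the past \(K\) bins, and \(F\), \(G\), \(Q\) are illustrative state-transition, stimulus-coupling, and noise parameters. In such a setting, the E-step smooths the latent \(\boldsymbol{\theta}_t\) given the observed spikes, and the M-step updates the stimulus and spike-history parameters.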



Related research

Local anaxonic neurons with graded-potential release are important ingredients of nervous systems, present in the mammalian olfactory bulb, in the human visual system, and in arthropods and nematodes. We develop a neuronal network model including both axonic and anaxonic neurons and monitor the activity as a function of the following parameters: the decay length of the graded potential in local neurons, the fraction of local neurons, the largest eigenvalue of the adjacency matrix, and the range of connections of the local neurons. Tuning the fraction of local neurons, we derive the phase diagram, which includes two transition lines: a critical line separating subcritical and supercritical regions, characterized by power-law distributions of avalanche sizes and durations, and a bifurcation line. We find that the overall behavior of the system is controlled by a parameter that tunes the relevance of local-neuron transmission with respect to axonal transmission. The statistical properties of spontaneous activity are affected by local neurons when their fraction is large and graded-potential transmission dominates axonal transmission. In this case, the scaling properties of spontaneous activity exhibit continuously varying exponents, rather than those of the mean-field branching-model universality class.
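For readers unfamiliar with the avalanche statistics mentioned above, the sketch below illustrates the conventional operational definition (an avalanche is a maximal run of consecutive time bins containing at least one spike; its size is the total spike count and its duration the number of bins). The binned-raster format and the random example are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def avalanche_stats(raster):
    """Avalanche sizes and durations from a binned spike raster.

    raster : (n_neurons, n_bins) array of spike counts per time bin.
    An avalanche is a maximal run of consecutive bins with >= 1 spike.
    """
    counts = raster.sum(axis=0)      # total spikes in each time bin
    sizes, durations = [], []
    size = dur = 0
    for c in counts:
        if c > 0:                    # extend the current avalanche
            size += c
            dur += 1
        elif dur > 0:                # a silent bin closes the avalanche
            sizes.append(size)
            durations.append(dur)
            size = dur = 0
    if dur > 0:                      # avalanche still open at the end
        sizes.append(size)
        durations.append(dur)
    return np.array(sizes), np.array(durations)

# Illustrative example with a random sparse raster
rng = np.random.default_rng(0)
raster = (rng.random((50, 10000)) < 0.01).astype(int)
sizes, durations = avalanche_stats(raster)
```

Power-law behavior would then be assessed from the empirical distributions of `sizes` and `durations`.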
We review different aspects of the simulation of spiking neural networks. We start by reviewing the different types of simulation strategies and algorithms that are currently implemented. We next review the precision of these simulation strategies, in particular in cases where plasticity depends on the exact timing of spikes. We then survey the different simulators and simulation environments presently available (restricted to those that are freely available, open source, and documented). For each simulation tool, its advantages and pitfalls are reviewed, with the aim of allowing the reader to identify which simulator is appropriate for a given task. Finally, we provide a series of benchmark simulations of different types of networks of spiking neurons, including Hodgkin-Huxley-type and integrate-and-fire models, interacting through current-based or conductance-based synapses and using clock-driven or event-driven integration strategies. The same set of models is implemented on each simulator, and the code is made available. The ultimate goal of this review is to provide a resource that helps identify the appropriate integration strategy and simulation tool for a given modeling problem involving spiking neural networks.
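As a minimal illustration of the clock-driven strategy discussed in the review, the sketch below integrates a single leaky integrate-and-fire neuron with a fixed time step using the forward Euler method; all parameter values are illustrative and are not taken from the benchmark simulations.

```python
import numpy as np

# Illustrative parameters (not from the reviewed benchmarks)
tau_m    = 20e-3    # membrane time constant (s)
v_rest   = -70e-3   # resting potential (V)
v_thresh = -50e-3   # spike threshold (V)
v_reset  = -65e-3   # reset potential (V)
r_m      = 1e8      # membrane resistance (ohm)
i_ext    = 0.25e-9  # constant input current (A)
dt       = 0.1e-3   # clock-driven time step (s)
t_max    = 1.0      # simulated time (s)

v = v_rest
spike_times = []
for step in range(int(t_max / dt)):
    # Forward Euler update of the membrane equation
    v += dt * (-(v - v_rest) + r_m * i_ext) / tau_m
    if v >= v_thresh:              # threshold checked once per time step
        spike_times.append(step * dt)
        v = v_reset                # instantaneous reset, no refractory period

print(f"{len(spike_times)} spikes in {t_max:.1f} s")
```

In an event-driven scheme, by contrast, the membrane equation would be advanced analytically between incoming events rather than on a fixed clock.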
The dominant modeling framework for understanding cortical computations is the heuristic firing-rate model. Despite their success, these models fall short of capturing spike-synchronization effects, linking to biophysical parameters, and describing finite-size fluctuations. In this opinion article, we propose that the refractory density method (RDM), also known as age-structured population dynamics or quasi-renewal theory, provides a powerful theoretical framework for building rate-based models of mesoscopic neural populations from realistic neuron dynamics at the microscopic level. We review recent advances achieved by the RDM in obtaining efficient population density equations for networks of generalized integrate-and-fire (GIF) neurons -- a class of neuron models that has been successfully fitted to various cell types. The theory not only predicts the nonstationary dynamics of large populations of neurons but also permits an extension to finite-size populations and a systematic reduction to low-dimensional rate dynamics. These new types of rate models will allow a re-examination of models of cortical computation under biological constraints.
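To make the refractory density method concrete, a standard form of the age-structured population equations is sketched below in generic notation (not necessarily that of the cited works): \(q(\tau,t)\) denotes the density of neurons whose last spike occurred \(\tau\) seconds ago, \(\rho(\tau,t)\) their instantaneous hazard rate, and \(A(t)\) the population activity.

\[
\frac{\partial q(\tau,t)}{\partial t} + \frac{\partial q(\tau,t)}{\partial \tau}
  = -\,\rho(\tau,t)\, q(\tau,t),
\]
\[
q(0,t) = A(t) = \int_0^{\infty} \rho(\tau,t)\, q(\tau,t)\, d\tau,
\qquad
\int_0^{\infty} q(\tau,t)\, d\tau = 1.
\]

The hazard \(\rho\) encapsulates the single-neuron (e.g., GIF) dynamics; reductions of this transport equation yield the low-dimensional rate models mentioned above.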
In this study, we analyzed the activity of monkey V1 neurons responding to grating stimuli of different orientations using inference methods for a time-dependent Ising model. The method provides optimal estimates of time-dependent neural interactions, with credible intervals, according to a sequential Bayesian estimation algorithm. Furthermore, it allows us to trace the dynamics of macroscopic network properties such as entropy, sparseness, and fluctuation. Here we report that, in all examined stimulus conditions, pairwise interactions contribute to increasing sparseness and fluctuation. We then demonstrate that the orientation of the grating stimulus is in part encoded in the pairwise interactions of the neural populations. These results demonstrate the utility of the state-space Ising model in assessing the contributions of neural interactions during stimulus processing.
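In generic notation (not necessarily that of the study), a time-dependent Ising model of this kind assigns to a binary population pattern \(\mathbf{x}=(x_1,\dots,x_N)\) in time bin \(t\) the probability

\[
p(\mathbf{x} \mid \boldsymbol{\theta}_t)
  = \exp\!\Big( \sum_i \theta_{i,t}\, x_i
  + \sum_{i<j} \theta_{ij,t}\, x_i x_j
  - \psi(\boldsymbol{\theta}_t) \Big),
\qquad
\boldsymbol{\theta}_t = \boldsymbol{\theta}_{t-1} + \boldsymbol{\xi}_t,
\quad \boldsymbol{\xi}_t \sim \mathcal{N}(\mathbf{0}, Q),
\]

where a smoothness prior such as the random walk on \(\boldsymbol{\theta}_t\) shown here is what permits sequential Bayesian estimation with credible intervals, and macroscopic quantities such as the entropy of \(p(\mathbf{x}\mid\boldsymbol{\theta}_t)\) can then be traced over time.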
Electrophysiological recordings of neuronal activity in the brain, and their analysis, provide a basis for exploring the structure of brain function and investigating the nervous system. The recorded signals are typically a combination of spikes and noise. High levels of background noise, together with the possibility of recording electrical signals from several neurons adjacent to the recording site, have led scientists to develop signal-processing tools such as spike sorting to facilitate the analysis of brain data. Spike sorting plays a pivotal role in understanding the electrophysiological activity of neuronal networks: it prepares recorded data for interpreting neuronal interactions and understanding the overall structure of brain function. Spike sorting consists of three steps: spike detection, feature extraction, and spike clustering, and several methods exist for implementing each step. This paper provides a systematic comparison of various spike-sorting sub-techniques applied to real extracellular recordings from the rat basolateral amygdala. Efficiently sorted data, obtained through a careful choice of spike-sorting sub-methods, lead to a better interpretation of the connectivity of brain structures under different conditions, which is critical for the diagnosis and treatment of neurological disorders. Here, spike detection is performed by an appropriate choice of threshold level via three different approaches. Feature extraction is carried out with PCA and kernel PCA, of which kernel PCA performs better. We apply four different algorithms for spike clustering: K-means, fuzzy C-means, Bayesian clustering, and fuzzy maximum-likelihood estimation. As required by most clustering algorithms, the optimal number of clusters is determined through validity indices for each method. Finally, the sorting results are evaluated using inter-spike interval histograms.
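As an illustrative outline of the three-step pipeline described above (detection, feature extraction, clustering), the sketch below uses a simple amplitude threshold, PCA, and K-means from scikit-learn; the parameter values and the synthetic signal are assumptions for demonstration, not the processing applied to the basolateral-amygdala data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def detect_spikes(signal, fs, window_ms=2.0):
    """Threshold-based spike detection; returns aligned waveform snippets."""
    # A common robust threshold: a multiple of the MAD-based noise estimate
    thr = 4.0 * np.median(np.abs(signal)) / 0.6745
    half = int(window_ms * 1e-3 * fs / 2)
    crossings = np.flatnonzero((signal[1:] > thr) & (signal[:-1] <= thr))
    waveforms = [signal[c - half:c + half]
                 for c in crossings if half <= c < len(signal) - half]
    return np.array(waveforms)

def sort_spikes(waveforms, n_components=3, n_clusters=3):
    """Feature extraction with PCA followed by K-means clustering."""
    features = PCA(n_components=n_components).fit_transform(waveforms)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)

# Illustrative run on synthetic noise (real recordings would replace `signal`)
fs = 30000.0                                   # sampling rate (Hz)
rng = np.random.default_rng(0)
signal = rng.normal(0.0, 1.0, int(10 * fs))
waveforms = detect_spikes(signal, fs)
if len(waveforms) >= 3:
    labels = sort_spikes(waveforms)
```

Kernel PCA, fuzzy C-means, or the other clustering methods mentioned above would slot in by replacing the PCA and K-means steps, with validity indices used to choose the number of clusters.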