The advent of large-scale, high-density extracellular recording devices allows simultaneous recording from thousands of neurons. However, the complexity and size of the data make it essential to develop robust algorithms for fully automated spike sorting. Here it is shown that limitations imposed by biological constraints, such as changes in spike waveforms induced under different drug regimes, should be carefully taken into consideration in future developments.
Electrophysiological recordings of brain neuronal activity, and their analysis, provide a basis for exploring the structure of brain function and investigating the nervous system. The recorded signals are typically a combination of spikes and noise. High levels of background noise, and the possibility of recording electrical signals from several neurons adjacent to the recording site, have led scientists to develop neuronal signal processing tools such as spike sorting to facilitate the analysis of brain data. Spike sorting plays a pivotal role in understanding the electrophysiological activity of neuronal networks: it prepares recorded data for interpreting neuronal interactions and understanding the overall structure of brain function. Spike sorting consists of three steps: spike detection, feature extraction, and spike clustering, and several methods exist for implementing each step. This paper provides a systematic comparison of various spike sorting sub-techniques applied to real extracellular recordings from the rat basolateral amygdala. Efficiently sorted data, obtained through a careful choice of spike sorting sub-methods, lead to better interpretation of the connectivity of brain structures under different conditions, which is critical in the diagnosis and treatment of neurological disorders. Here, spike detection is performed by choosing an appropriate threshold level via three different approaches. Feature extraction is done through PCA and kernel PCA, of which kernel PCA performs better. We have applied four different clustering algorithms: K-means, fuzzy C-means, Bayesian clustering, and fuzzy maximum likelihood estimation. As required by most clustering algorithms, the optimal number of clusters is determined through validity indices for each method. Finally, the sorting results are evaluated using inter-spike interval histograms.
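The three-step pipeline described above (threshold detection, feature extraction, clustering) can be sketched end-to-end on synthetic data. Everything here is an illustrative assumption, not taken from the paper: the sampling rate, waveform shapes, the 4-sigma median-based threshold, and K = 2. Kernel PCA and the fuzzy/Bayesian clustering variants are omitted for brevity, with plain PCA (via SVD) and K-means standing in.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic extracellular trace: two units with distinct waveforms plus noise.
fs = 20000                       # sampling rate (Hz), assumed
n = fs * 2                       # 2 s of data
trace = rng.normal(0.0, 1.0, n)
w1 = -8 * np.exp(-0.5 * ((np.arange(20) - 6) / 2.0) ** 2)   # narrow spike
w2 = -6 * np.exp(-0.5 * ((np.arange(20) - 10) / 4.0) ** 2)  # broad spike
for w in (w1, w2):
    for t in rng.choice(n - 40, 60):
        trace[t:t + 20] += w

# --- 1. Spike detection: threshold at 4x a robust (median-based) noise estimate.
sigma = np.median(np.abs(trace)) / 0.6745
thr = -4 * sigma
crossings = np.flatnonzero((trace[1:] < thr) & (trace[:-1] >= thr))
keep = np.concatenate(([True], np.diff(crossings) > fs // 1000))  # 1 ms dead time
peaks = crossings[keep]

# --- 2. Feature extraction: align 20-sample windows, project onto top-2 PCs.
snips = np.array([trace[p - 5:p + 15] for p in peaks if 5 <= p <= n - 15])
X = snips - snips.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
feats = X @ Vt[:2].T

# --- 3. Clustering: plain K-means with K=2 (a few Lloyd iterations).
centers = feats[rng.choice(len(feats), 2, replace=False)]
for _ in range(20):
    labels = np.argmin(((feats[:, None] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([feats[labels == k].mean(axis=0)
                        if np.any(labels == k) else centers[k] for k in range(2)])

print(len(peaks), "spikes detected; cluster sizes:", np.bincount(labels))
```

A real pipeline would add upsampled alignment, validity indices to choose K, and ISI-histogram checks on each cluster, as the paper describes.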
Congenital cognitive dysfunctions are frequently due to deficits in molecular pathways that underlie synaptic plasticity. For example, Rubinstein-Taybi syndrome (RTS) is due to a mutation in cbp, encoding the histone acetyltransferase CREB-binding protein (CBP). CBP is a transcriptional co-activator for CREB, and induction of CREB-dependent transcription plays a key role in long-term memory (LTM). In animal models of RTS, mutations of cbp impair LTM and late-phase long-term potentiation (LTP). To explore intervention strategies to rescue the deficits in LTP, we extended a previous model of LTP induction to describe histone acetylation and simulated LTP impairment due to cbp mutation. Plausible drug effects were simulated by parameter changes, and many increased LTP. However, no parameter variation consistent with a biochemical effect of a known drug fully restored LTP. Thus we examined paired parameter variations. A pair that simulated the effects of a phosphodiesterase inhibitor (slowing cAMP degradation) concurrent with a deacetylase inhibitor (prolonging histone acetylation) restored LTP. Importantly, these paired parameter changes did not alter basal synaptic weight. A pair that simulated a phosphodiesterase inhibitor and an acetyltransferase activator was similarly effective. For both pairs, strong additive synergism was present. These results suggest that promoting histone acetylation while simultaneously slowing the degradation of cAMP may constitute a promising strategy for restoring deficits in LTP that may be associated with learning deficits in RTS. More generally, these results illustrate that combining modeling and empirical studies may help design effective therapies for improving long-term synaptic plasticity and learning in cognitive disorders.
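A deliberately minimal caricature of the paired-drug logic can be written as two coupled first-order equations: cAMP driven by a stimulus and cleared at rate k_deg, and histone acetylation driven by CBP-dependent activity (rate k_on) and removed at rate k_deac. All rate constants and the "LTP proxy" readout are invented for illustration; the actual model in the paper is far more detailed.

```python
def ltp_proxy(k_on, k_deg, k_deac, dt=0.01, T=200.0):
    """Euler-integrate a two-variable caricature of the cAMP -> acetylation
    cascade; the time-integral of acetylation stands in for LTP magnitude."""
    c = a = total = 0.0
    for i in range(int(T / dt)):
        s = 1.0 if i * dt < 1.0 else 0.0        # brief stimulus pulse
        c += dt * (s - k_deg * c)               # cAMP: production minus degradation
        a += dt * (k_on * c - k_deac * a)       # histone acetylation
        total += dt * a
    return total

wt     = ltp_proxy(k_on=1.0, k_deg=1.0, k_deac=1.0)
mut    = ltp_proxy(k_on=0.5, k_deg=1.0, k_deac=1.0)   # cbp mutation: reduced CBP activity
pde_i  = ltp_proxy(k_on=0.5, k_deg=0.5, k_deac=1.0)   # + phosphodiesterase inhibitor
hdac_i = ltp_proxy(k_on=0.5, k_deg=1.0, k_deac=0.5)   # + deacetylase inhibitor
both   = ltp_proxy(k_on=0.5, k_deg=0.5, k_deac=0.5)   # + both drugs

# Synergism: the combined gain exceeds the sum of the single-drug gains.
print(both - mut > (pde_i - mut) + (hdac_i - mut))    # prints True
```

The superadditivity falls out of the structure: the integrated acetylation scales roughly as k_on / (k_deg * k_deac), so halving two rates multiplies rather than adds their effects.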
In artificial neural networks trained with gradient descent, the weights used for processing stimuli are also used during backward passes to calculate gradients. For the real brain to approximate gradients, gradient information would have to be propagated separately, such that one set of synaptic weights is used for processing and another set is used for backward passes. This produces the so-called weight transport problem for biological models of learning, where the backward weights used to calculate gradients need to mirror the forward weights used to process stimuli. The weight transport problem has been considered so hard that popular proposals for biological learning assume that the backward weights are simply random, as in the feedback alignment algorithm. However, such random weights do not appear to work well for large networks. Here we show how the discontinuity introduced in a spiking system can lead to a solution to this problem. The resulting algorithm is a special case of an estimator used for causal inference in econometrics, regression discontinuity design. We show empirically that this algorithm rapidly makes the backward weights approximate the forward weights. As the backward weights become correct, this improves learning performance over feedback alignment on tasks such as Fashion-MNIST, SVHN, CIFAR-10, and VOC. Our results demonstrate that a simple learning rule in a spiking network can allow neurons to produce the right backward connections and thus solve the weight transport problem.
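The feedback alignment baseline mentioned above, in which errors are propagated through a fixed random matrix B rather than through the transpose of the forward weights, can be sketched in a few lines on a toy regression task. This is the baseline, not the paper's regression-discontinuity rule, and all layer sizes, learning rates, and the task itself are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression task for a 2-layer network.
X = rng.normal(size=(256, 10))
Y = np.tanh(X @ rng.normal(size=(10, 3)))      # arbitrary smooth target

W1 = rng.normal(0, 0.3, (10, 20))
W2 = rng.normal(0, 0.3, (20, 3))
B  = rng.normal(0, 0.3, (3, 20))               # fixed feedback weights, never trained

def forward(X):
    H = np.tanh(X @ W1)
    return H, H @ W2

_, Yhat = forward(X)
loss0 = np.mean((Yhat - Y) ** 2)

lr = 0.05
for _ in range(500):
    H, Yhat = forward(X)
    e = Yhat - Y                               # output error
    dW2 = H.T @ e / len(X)                     # exact gradient for the top layer
    dH = (e @ B) * (1 - H ** 2)                # feedback alignment: B, not W2.T
    dW1 = X.T @ dH / len(X)
    W1 -= lr * dW1
    W2 -= lr * dW2

_, Yhat = forward(X)
loss1 = np.mean((Yhat - Y) ** 2)
print("loss went from", round(loss0, 3), "to", round(loss1, 3))
```

The paper's contribution is a rule that additionally drives B toward W2.T over training; in the sketch above B stays random, which is exactly the regime the abstract says breaks down at scale.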
Neurons perform computations, and convey the results of those computations through the statistical structure of their output spike trains. Here we present a practical method, grounded in the information-theoretic analysis of prediction, for inferring a minimal representation of that structure and for characterizing its complexity. Starting from spike trains, our approach finds their causal state models (CSMs), the minimal hidden Markov models or stochastic automata capable of generating statistically identical time series. We then use these CSMs to objectively quantify both the generalizable structure and the idiosyncratic randomness of the spike train. Specifically, we show that the expected algorithmic information content (the information needed to describe the spike train exactly) can be split into three parts describing (1) the time-invariant structure (complexity) of the minimal spike-generating process, which describes the spike train statistically; (2) the randomness (internal entropy rate) of the minimal spike-generating process; and (3) a residual pure noise term not described by the minimal spike-generating process. We use CSMs to approximate each of these quantities. The CSMs are inferred nonparametrically from the data, making only mild regularity assumptions, via the causal state splitting reconstruction algorithm. The methods presented here complement more traditional spike train analyses by describing not only spiking probability and spike train entropy, but also the complexity of a spike train's structure. We demonstrate our approach using both simulated spike trains and experimental data recorded in rat barrel cortex during vibrissa stimulation.
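The intuition behind causal states, namely that two histories are equivalent when they predict the same distribution over futures, can be illustrated on a toy binary spike train with one step of refractoriness. This is not the causal state splitting reconstruction algorithm itself, just a crude grouping of fixed-length histories by their empirical predictive distributions; the transition probabilities and merge tolerance are invented for illustration.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(2)

# Binary spike train where P(spike) depends only on the previous bin.
p_spike = {0: 0.3, 1: 0.05}      # illustrative numbers, not from the paper
x = [0]
for _ in range(200000):
    x.append(int(rng.random() < p_spike[x[-1]]))

# Empirical predictive distribution of each length-2 history.
counts = defaultdict(lambda: [0, 0])
for i in range(2, len(x)):
    counts[(x[i - 2], x[i - 1])][x[i]] += 1
pred = {h: c[1] / sum(c) for h, c in counts.items()}

# Causal-state intuition: merge histories whose predictions agree.
# Only the last symbol matters here, so four histories collapse to two states.
tol = 0.02
states = []
for h in sorted(pred):
    for s in states:
        if abs(pred[s[0]] - pred[h]) < tol:
            s.append(h)
            break
    else:
        states.append([h])
print(len(states), "causal states:", states)
```

CSSR does this splitting/merging over growing history lengths with statistical tests rather than a fixed tolerance, which is what makes the inference nonparametric.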
Neural noise sets a limit to information transmission in sensory systems. In several areas, the spiking response to a repeated stimulus has shown a higher degree of regularity than predicted by a Poisson process. However, a simple model to explain this low variability is still lacking. Here we introduce a new model, with a correction to Poisson statistics, which can accurately predict the regularity of neural spike trains in response to a repeated stimulus. The model has only two parameters, but can reproduce the observed variability in retinal recordings under various conditions. We show analytically why this approximation can work: in a model of the spike-emitting process that assumes a refractory period, we derive that our simple correction can closely approximate the spike train statistics over a broad range of firing rates. Our model can easily be plugged into stimulus-processing models, such as the linear-nonlinear model or its generalizations, to replace the commonly assumed Poisson spike train hypothesis. It estimates the amount of information transmitted much more accurately than Poisson models in retinal recordings. Thanks to its simplicity, this model has the potential to explain low variability in other areas.
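The effect of a refractory period on spike-count variability can be checked with a short simulation: identical Bernoulli-bin spike trains with and without an absolute dead time, compared via the Fano factor (variance over mean of counts across repeated trials, which is approximately 1 for a Poisson process). All rates and durations below are illustrative assumptions, and this is not the paper's two-parameter model.

```python
import numpy as np

rng = np.random.default_rng(3)

dt, T = 0.001, 1.0                  # 1 ms bins, 1 s trials (assumed)
rate, refrac = 50.0, 0.005          # 50 Hz drive, 5 ms absolute refractory period
n_bins, n_trials = int(T / dt), 1000
refrac_bins = int(refrac / dt)

def spike_counts(refractory):
    """Count spikes per trial; optionally enforce an absolute dead time."""
    out = np.empty(n_trials)
    for t in range(n_trials):
        c, dead = 0, 0
        for _ in range(n_bins):
            if dead > 0:
                dead -= 1
            elif rng.random() < rate * dt:
                c += 1
                if refractory:
                    dead = refrac_bins
        out[t] = c
    return out

poisson = spike_counts(refractory=False)
refr = spike_counts(refractory=True)
for name, c in [("Poisson", poisson), ("refractory", refr)]:
    print(name, "Fano factor:", round(c.var() / c.mean(), 2))
```

The refractory train comes out markedly sub-Poisson (Fano factor well below 1), which is the regularity the abstract's correction is designed to capture.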
Gerrit Hilgen (Biosciences Institute, Faculty of Medical Sciences, Newcastle University). (2019). "Challenges for automated spike sorting: beware of pharmacological manipulations".