
Patterns of interval correlations in neural oscillators with adaptation

Posted by Tilo Schwalger
Published 2013
Research field: Biology
Paper language: English





Neural firing is often subject to negative feedback by adaptation currents. These currents can induce strong correlations among the time intervals between spikes. Here we study analytically the interval correlations of a broad class of noisy neural oscillators with spike-triggered adaptation of arbitrary strength and time scale. Our weak-noise theory provides a general relation between the correlations and the phase-response curve (PRC) of the oscillator, proves anti-correlations between neighboring intervals for adapting neurons with type I PRC and identifies a single order parameter that determines the qualitative pattern of correlations. Monotonically decaying or oscillating correlation structures can be related to qualitatively different voltage traces after spiking, which can be explained by the phase plane geometry. At high firing rates, the long-term variability of the spike train associated with the cumulative interval correlations becomes small, independent of model details. Our results are verified by comparison with stochastic simulations of the exponential, leaky, and generalized integrate-and-fire models with adaptation.
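The comparison the abstract mentions can be reproduced in miniature: a minimal sketch of a noisy leaky integrate-and-fire neuron with spike-triggered adaptation, from which the lag-1 serial correlation coefficient of the interspike intervals is estimated. All parameter values (`tau_a`, `mu`, `D`, `delta_a`) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustrative parameters (not from the paper).
dt = 0.01
tau_a = 2.0       # adaptation time constant
mu = 2.0          # constant suprathreshold drive
D = 0.05          # noise intensity
delta_a = 0.5     # adaptation jump per spike
n_steps = 800_000

noise = np.sqrt(2 * D * dt) * rng.standard_normal(n_steps)
v, a = 0.0, 0.0
spike_times = []
for i in range(n_steps):
    # Euler-Maruyama step of dv/dt = mu - v - a + white noise
    v += dt * (mu - v - a) + noise[i]
    a += -dt * a / tau_a
    if v >= 1.0:              # threshold crossing: reset voltage, increment adaptation
        v = 0.0
        a += delta_a
        spike_times.append(i * dt)

isi = np.diff(spike_times)
# lag-1 serial correlation coefficient of interspike intervals
rho1 = np.corrcoef(isi[:-1], isi[1:])[0, 1]
print(f"{len(isi)} intervals, lag-1 serial correlation rho_1 = {rho1:.3f}")
```

For parameters like these, the spike-triggered adaptation yields a negative `rho1`, consistent with the anti-correlations between neighboring intervals discussed in the abstract.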




Read also

Large-scale recordings of neuronal activity make it possible to gain insights into the collective activity of neural ensembles. It has been hypothesized that neural populations might be optimized to operate at a thermodynamic critical point, and that this property has implications for information processing. Support for this notion has come from a series of studies which identified statistical signatures of criticality in the ensemble activity of retinal ganglion cells. What are the underlying mechanisms that give rise to these observations? Here we show that signatures of criticality arise even in simple feed-forward models of retinal population activity. In particular, they occur whenever neural population data exhibits correlations, and is randomly sub-sampled during data analysis. These results show that signatures of criticality are not necessarily indicative of an optimized coding strategy, and challenge the utility of analysis approaches based on equilibrium thermodynamics for understanding partially observed biological systems.
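The premise of that argument — that shared-input correlations survive random sub-sampling and inflate population statistics — can be illustrated with a toy feed-forward model. This is a hedged sketch under assumed parameters, not the paper's actual model: a common fluctuating drive correlates otherwise independent binary neurons.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feed-forward model (an illustrative assumption, not the paper's model):
# a shared fluctuating input makes all neurons pairwise correlated.
N, T = 100, 50_000
common = rng.normal(0.0, 1.0, size=T)          # shared drive in each time bin
p = 1.0 / (1.0 + np.exp(-(common - 2.0)))      # common firing probability per bin
spikes = rng.random((T, N)) < p[:, None]       # neurons conditionally independent

# Randomly sub-sample n neurons, as typically done when analysing recordings.
n = 20
sub = spikes[:, rng.choice(N, size=n, replace=False)]

# Summed activity K of the sub-sampled population.
K = sub.sum(axis=1)
pbar = sub.mean()
binom_var = n * pbar * (1.0 - pbar)            # variance if neurons were independent
print(f"Var(K) = {K.var():.2f} vs independent (binomial) {binom_var:.2f}")
```

The summed activity shows far more variance than an independent (binomial) population with the same mean rate; this excess, which survives sub-sampling, is the kind of statistical signature that criticality analyses pick up.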
We study the storage of multiple phase-coded patterns as stable dynamical attractors in recurrent neural networks with sparse connectivity. To determine the synaptic strength of existing connections and store the phase-coded patterns, we introduce a learning rule inspired by spike-timing-dependent plasticity (STDP). We find that, after learning, the spontaneous dynamics of the network replay one of the stored dynamical patterns, depending on the network initialization. We study the network capacity as a function of topology, and find that a small-world-like topology may be optimal, as a compromise between the high wiring cost of long-range connections and the capacity increase.
David Cox, 2016
This publication serves as an overview of clique topology -- a novel matrix analysis technique used to extract structural features from neural activity data that contains hidden nonlinearities. We highlight work done by Giusti et al., which introduces clique topology and verifies its applicability to neural feature extraction by showing that neural correlations in the rat hippocampus are determined by the geometric structure of hippocampal circuits, rather than being a consequence of positional coding.
We model spontaneous cortical activity with a network of coupled spiking units, in which multiple spatio-temporal patterns are stored as dynamical attractors. We introduce an order parameter, which measures the overlap (similarity) between the activity of the network and the stored patterns. We find that, depending on the excitability of the network, different working regimes are possible. For high excitability, the dynamical attractors are stable, and a collective activity that replays one of the stored patterns emerges spontaneously, while for low excitability, no replay is induced. Between these two regimes, there is a critical region in which the dynamical attractors are unstable, and intermittent short replays are induced by noise. At the critical spiking threshold, the order parameter goes from zero to one, and its fluctuations are maximized, as expected for a phase transition (and as observed in recent experimental results in the brain). Notably, in this critical region, the avalanche size and duration distributions follow power laws. Critical exponents are consistent with a scaling relationship observed recently in neural avalanches measurements. In conclusion, our simple model suggests that avalanche power laws in cortical spontaneous activity may be the effect of a network at the critical point between the replay and non-replay of spatio-temporal patterns.
Accumulating evidence shows that the cerebral cortex operates near a critical state characterized by a power-law size distribution of neural avalanche activities, yet evidence of this critical state in artificial neural networks mimicking the cerebral cortex has been lacking. Here we design an artificial neural network of coupled phase oscillators and, using the technique of reservoir computing in machine learning, train it to predict chaos. We find that when the machine is properly trained, oscillators in the reservoir synchronize into clusters whose sizes follow a power-law distribution. This feature, however, is absent when the machine is poorly trained. Additionally, we find that regardless of the synchronization degree of the original network, once properly trained, the reservoir network always develops to the same critical state, exemplifying the attractor nature of this state in machine learning. The generality of the results is verified with different reservoir models and different target systems, and we find that the scaling exponent of the distribution is independent of the reservoir details and the bifurcation parameter of the target system, but changes when the dynamics of the target system is changed to a different type. These findings shed light on the nature of machine learning and are helpful for the design of high-performance machines in physical systems.