
Interpreting neural computations by examining intrinsic and embedding dimensionality of neural activity

Posted by Srdjan Ostojic
Publication date: 2021
Research field: Biology
Paper language: English





The ongoing exponential rise in recording capacity calls for new approaches for analysing and interpreting neural data. Effective dimensionality has emerged as an important property of neural activity across populations of neurons, yet different studies rely on different definitions and interpretations of this quantity. Here we focus on intrinsic and embedding dimensionality, and discuss how they might reveal computational principles from data. Reviewing recent works, we propose that the intrinsic dimensionality reflects information about the latent variables encoded in collective activity, while embedding dimensionality reveals the manner in which this information is processed. We conclude by highlighting the role of network models as an ideal substrate for testing specific hypotheses about the computational principles reflected in intrinsic and embedding dimensionality.
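To make the distinction concrete, here is a minimal sketch (my own illustration, not from the paper) contrasting a linear measure of embedding dimensionality (the participation ratio of covariance eigenvalues) with a nonlinear intrinsic-dimensionality estimate (the Two-NN estimator of Facco et al., 2017) on simulated population activity lying on a ring manifold:

```python
import numpy as np

def participation_ratio(X):
    """Embedding dimensionality proxy: (sum_i l_i)^2 / sum_i l_i^2
    for eigenvalues l_i of the covariance of X (a linear measure)."""
    eigvals = np.clip(np.linalg.eigvalsh(np.cov(X, rowvar=False)), 0.0, None)
    return eigvals.sum() ** 2 / (eigvals ** 2).sum()

def two_nn_intrinsic_dim(X):
    """Intrinsic dimensionality from ratios of first and second
    nearest-neighbour distances (maximum-likelihood form)."""
    sq = (X ** 2).sum(axis=1)
    dist = np.sqrt(np.maximum(sq[:, None] + sq[None, :] - 2 * X @ X.T, 0.0))
    np.fill_diagonal(dist, np.inf)
    nearest = np.sort(dist, axis=1)[:, :2]
    mu = nearest[:, 1] / nearest[:, 0]
    return len(X) / np.log(mu).sum()

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, size=500)          # one latent variable
ring = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # ring manifold in 2D
Q = np.linalg.qr(rng.standard_normal((50, 2)))[0]        # orthonormal embedding
X = ring @ Q.T                                           # activity of 50 "neurons"
print(participation_ratio(X))   # ~2: linear subspace occupied by the ring
print(two_nn_intrinsic_dim(X))  # ~1: a single latent variable (the angle)
```

The gap between the two numbers is the kind of signature the abstract points to: a single latent variable (intrinsic dimensionality 1) that occupies a two-dimensional linear subspace (embedding dimensionality 2).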




Read also

Sungho Hong (2008)
In many cases, the computation of a neural system can be reduced to a receptive field, or a set of linear filters, and a thresholding function, or gain curve, which determines the firing probability; this is known as a linear/nonlinear model. In some forms of sensory adaptation, the linear filters and gain curve adjust very rapidly to changes in the variance of a randomly varying driving input. An apparently similar but previously unrelated issue is the observation of gain control by background noise in cortical neurons: the slope of the firing rate vs current (f-I) curve changes with the variance of background random input. Here, we show a direct correspondence between these two observations by relating variance-dependent changes in the gain of f-I curves to characteristics of the changing empirical linear/nonlinear model obtained by sampling. When the underlying system is fixed, we derive relationships connecting the change of gain with respect to both mean and variance to the receptive fields obtained by reverse correlation on a white-noise stimulus. Using two conductance-based model neurons that display distinct gain modulation properties through a simple change in parameters, we show that the coding properties of both models quantitatively satisfy the predicted relationships. Our results describe how both variance-dependent gain modulation and adaptive neural computation result from intrinsic nonlinearity.
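As a hedged sketch of the reverse-correlation step this abstract refers to (a toy linear/nonlinear model of my own, not the authors' conductance-based models), the spike-triggered average on a white-noise stimulus recovers the linear filter up to a scale factor:

```python
import numpy as np

rng = np.random.default_rng(0)
T, L = 100_000, 20                       # time steps, filter length
true_filter = np.exp(-np.arange(L) / 5.0) * np.sin(np.arange(L) / 2.0)
stimulus = rng.standard_normal(T)        # white-noise drive

# Linear stage, then a sigmoidal gain curve giving the firing probability
drive = np.convolve(stimulus, true_filter, mode="full")[:T]
p_spike = 1.0 / (1.0 + np.exp(-(drive - 1.0)))
spikes = rng.random(T) < 0.1 * p_spike   # Bernoulli spiking

# Spike-triggered average: for Gaussian white noise it is proportional
# to the underlying linear filter (Bussgang-type result)
spike_times = np.flatnonzero(spikes)
spike_times = spike_times[spike_times >= L]
sta = np.zeros(L)
for t in spike_times:
    sta += stimulus[t - L + 1 : t + 1][::-1]
sta /= len(spike_times)
print(np.corrcoef(sta, true_filter)[0, 1])  # should be close to 1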
The activity of a sparse network of leaky integrate-and-fire neurons is carefully revisited with reference to a regime of bona fide asynchronous dynamics. The study is preceded by a finite-size scaling analysis, carried out to identify a setup where collective synchronization is negligible. The comparison between quenched and annealed networks reveals the emergence of substantial differences when the coupling strength is increased, via a scenario somewhat reminiscent of a phase transition. For sufficiently strong synaptic coupling, quenched networks exhibit a highly bursting neural activity, well reproduced by a self-consistent approach based on the assumption that the input synaptic current is the superposition of independent renewal processes. The distribution of interspike intervals turns out to be relatively long-tailed; a crucial feature required for the self-sustainment of the bursting activity in a regime where neurons operate on average (much) below threshold. A semi-quantitative analogy with Ornstein-Uhlenbeck processes helps validate this interpretation. Finally, an alternative explanation in terms of Poisson processes is offered under the additional assumption of mutual correlations among excitatory and inhibitory spikes.
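A minimal sketch of the fluctuation-driven regime described here (a single leaky integrate-and-fire neuron driven by an Ornstein-Uhlenbeck current whose mean sits below threshold; parameters are illustrative, not the paper's network model):

```python
import numpy as np

rng = np.random.default_rng(1)
dt, tau_m, tau_ou = 0.1, 20.0, 5.0     # ms
v_th, v_reset = 1.0, 0.0
mu, sigma = 0.8, 0.5                   # mean drive below threshold
steps = 400_000
noise = rng.standard_normal(steps)
kick = sigma * np.sqrt(2 * dt / tau_ou)

v, I = 0.0, mu
last_spike, isis = 0.0, []
for k in range(steps):
    # Ornstein-Uhlenbeck input current: relaxes to mu, kicked by noise
    I += dt / tau_ou * (mu - I) + kick * noise[k]
    v += dt / tau_m * (I - v)          # leaky integration
    if v >= v_th:                      # threshold crossing: spike and reset
        v = v_reset
        isis.append(k * dt - last_spike)
        last_spike = k * dt

isis = np.asarray(isis)
print("firing rate (Hz):", 1000.0 * isis.size / (steps * dt))
# A coefficient of variation near or above 1 is the hallmark of
# irregular, fluctuation-driven firing
print("ISI coefficient of variation:", isis.std() / isis.mean())
```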
Neural population activity is theorized to reflect an underlying dynamical structure. This structure can be accurately captured using state space models with explicit dynamics, such as those based on recurrent neural networks (RNNs). However, using recurrence to explicitly model dynamics necessitates sequential processing of data, slowing real-time applications such as brain-computer interfaces. Here we introduce the Neural Data Transformer (NDT), a non-recurrent alternative. We test the NDT's ability to capture autonomous dynamical systems by applying it to synthetic datasets with known dynamics and data from monkey motor cortex during a reaching task well-modeled by RNNs. The NDT models these datasets as well as state-of-the-art recurrent models. Further, its non-recurrence enables 3.9 ms inference, well within the loop time of real-time applications and more than 6 times faster than recurrent baselines on the monkey reaching dataset. These results suggest that an explicit dynamics model is not necessary to model autonomous neural population dynamics. Code: https://github.com/snel-repo/neural-data-transformers
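To illustrate why non-recurrence speeds up inference (a toy single-head self-attention layer of my own, not the NDT implementation; see the linked repository for the real code): every time bin is processed in one parallel pass, whereas an RNN must step through bins sequentially.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (time_bins, features). All bins are attended to at once,
    unlike an RNN, which must iterate through time."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
T, n_neurons, d = 50, 30, 16               # bins, channels, model width
spikes = rng.poisson(2.0, size=(T, n_neurons)).astype(float)
W_in = rng.standard_normal((n_neurons, d)) / np.sqrt(n_neurons)
Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
out = self_attention(spikes @ W_in, Wq, Wk, Wv)
print(out.shape)  # (50, 16): each bin attended to every other in one pass
```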
Periodic neural activity not locked to the stimulus or to motor responses is usually ignored. Here, we present new tools for modeling and quantifying the information transmission based on periodic neural activity that occurs with quasi-random phase relative to the stimulus. We propose a model to reproduce characteristic features of oscillatory spike trains, such as histograms of inter-spike intervals and phase locking of spikes to an oscillatory influence. The proposed model is based on an inhomogeneous Gamma process governed by a density function that is a product of the usual stimulus-dependent rate and a quasi-periodic function. Further, we present an analysis method generalizing the direct method (Rieke et al., 1999; Brenner et al., 2000) to assess the information content in such data. We demonstrate these tools on recordings from relay cells in the lateral geniculate nucleus of the cat.
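A hedged sketch of the model class named in this abstract (my reading, not the authors' code): spikes drawn from an inhomogeneous Gamma process whose density is the product of a stimulus-dependent rate and a quasi-periodic modulation, generated by time rescaling.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, T = 0.001, 10.0                    # seconds
t = np.arange(0.0, T, dt)
stim_rate = 20.0 * (1.0 + 0.5 * np.sin(2 * np.pi * 0.5 * t))  # slow stimulus drive
phase_jitter = np.cumsum(rng.standard_normal(t.size)) * 0.02  # quasi-random phase
periodic = 1.0 + 0.8 * np.cos(2 * np.pi * 40.0 * t + phase_jitter)  # ~40 Hz modulation
rate = stim_rate * periodic            # density = rate x quasi-periodic function

# Time rescaling: in rescaled time Lambda(t) = integral of rate, an
# order-k Gamma renewal process has unit-mean Gamma(k, 1/k) intervals
k = 4                                  # Gamma order (spike-train regularity)
Lambda = np.cumsum(rate) * dt
next_thresh = rng.gamma(k, 1.0 / k)
spike_times = []
for i, cum in enumerate(Lambda):
    if cum >= next_thresh:
        spike_times.append(t[i])
        next_thresh = cum + rng.gamma(k, 1.0 / k)
print(len(spike_times), "spikes; mean rate ~", len(spike_times) / T, "Hz")
```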
We investigate the possibility that narrowband oscillations may emerge from completely asynchronous, independent neural firing. We find that a population of asynchronous neurons may produce narrowband oscillations if each neuron fires quasi-periodically, and we deduce bounds on the degree of variability in neural spike-timing which will permit the emergence of such oscillations. These results suggest a novel mechanism of neural rhythmogenesis, and they help to explain recent experimental reports of large-amplitude local field potential oscillations in the absence of neural spike-timing synchrony. Simply put, although synchrony can produce oscillations, oscillations do not always imply the existence of synchrony.
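A minimal sketch of the claim (an illustration of my own, not the paper's analysis): independent neurons firing quasi-periodically near 10 Hz with random phases, so there is no spike-time synchrony, still produce a narrowband spectral peak in the summed activity.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, T, n_neurons = 0.001, 20.0, 200
n_bins = int(T / dt)
population = np.zeros(n_bins)
for _ in range(n_neurons):
    t = rng.uniform(0.0, 0.1)          # random initial phase: no synchrony
    while t < T:
        population[int(t / dt)] += 1
        t += rng.normal(0.1, 0.01)     # ~10 Hz firing with jittered intervals

# Power spectrum of the summed spike count
spec = np.abs(np.fft.rfft(population - population.mean())) ** 2
freqs = np.fft.rfftfreq(n_bins, dt)
peak = freqs[np.argmax(spec[1:]) + 1]  # skip the DC bin
print("spectral peak near", peak, "Hz")  # ~10 Hz despite asynchrony
```

With random phases the cross-neuron terms average out, so the population spectrum inherits the narrowband peak of each quasi-periodic train; increasing the interval jitter broadens and eventually destroys the peak, consistent with the bounds on spike-timing variability the abstract mentions.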