
The implications of perception as probabilistic inference for correlated neural variability during behavior

Posted by: Ralf M Haefner
Publication date: 2014
Research field: Biology
Paper language: English





This paper addresses two main challenges facing systems neuroscience today: understanding the nature and function of a) cortical feedback between sensory areas and b) correlated variability. Starting from the old idea of perception as probabilistic inference, we show how to use knowledge of the psychophysical task to make easily testable predictions for the impact that feedback signals have on early sensory representations. Applying our framework to the well-studied two-alternative forced choice task paradigm, we can explain multiple empirical findings that have been hard to account for by the traditional feedforward model of sensory processing, including the task-dependence of neural response correlations, and the diverging time courses of choice probabilities and psychophysical kernels. Our model makes a number of new predictions and, importantly, characterizes a component of correlated variability that represents task-related information rather than performance-degrading noise. It also demonstrates a normative way to integrate sensory and cognitive components into physiologically testable mathematical models of perceptual decision-making.
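
The core prediction above, that task-defined feedback shapes correlated variability, can be illustrated with a toy simulation. The following is a minimal sketch, not the paper's full inference model: a fluctuating decision variable feeds back onto two sensory neurons with task-defined weights, and the sign of their noise correlation flips with the task. All weights, gains, and noise levels here are illustrative assumptions.

```python
# Toy sketch (assumptions, not the paper's model): top-down feedback of a
# fluctuating decision variable induces task-dependent noise correlations.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 20000

def simulate(task_weights, fb_gain=0.8, noise_sd=1.0):
    """Sensory responses = private noise + feedback of the decision belief.

    task_weights (hypothetical): how strongly the decision belief feeds
    back onto each neuron; the sign pattern reflects which 2AFC choice
    each neuron's preferred feature supports.
    """
    belief = rng.standard_normal(n_trials)              # decision variable
    private = noise_sd * rng.standard_normal((n_trials, 2))
    r = private + fb_gain * belief[:, None] * np.asarray(task_weights)
    return np.corrcoef(r.T)[0, 1]

# Task A: both neurons support the same choice -> positive correlation.
# Task B: they support opposite choices -> negative correlation.
print("task A corr:", simulate([+1.0, +1.0]))   # ~ +0.4
print("task B corr:", simulate([+1.0, -1.0]))   # ~ -0.4
```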




Read also

Neural responses in the cortex change over time both systematically, due to ongoing plasticity and learning, and seemingly randomly, due to various sources of noise and variability. Most previous work considered each of these processes, learning and variability, in isolation; here we study neural networks exhibiting both and show that their interaction leads to the emergence of powerful computational properties. We trained neural networks on classical unsupervised learning tasks, in which the objective was to represent their inputs in an efficient, easily decodable form, with an additional cost on neural reliability, which we derived from basic biophysical considerations. This cost on reliability introduced a tradeoff between energetically cheap but inaccurate representations and energetically costly but accurate ones. Despite the learning tasks being non-probabilistic, the networks solved this tradeoff by developing a probabilistic representation: neural variability represented samples from the statistically appropriate posterior distributions that would result from performing probabilistic inference over their inputs. We provide an analytical understanding of this result by revealing a connection between the cost of reliability and the objective of a state-of-the-art Bayesian inference strategy: variational autoencoders. We show that the same cost leads to the emergence of increasingly accurate probabilistic representations as networks become more complex, from single-layer feed-forward, through multi-layer feed-forward, to recurrent architectures. Our results provide insights into why neural responses in sensory areas show signatures of sampling-based probabilistic representations, and may inform future deep learning algorithms and their implementation in stochastic low-precision computing systems.
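
The connection to variational autoencoders mentioned above can be made concrete with the standard Gaussian ELBO, whose KL term penalizes overly reliable (low-variance) codes, much as the biophysical reliability cost does. A minimal sketch, with illustrative shapes and a unit Gaussian prior assumed:

```python
# Hedged sketch of the VAE objective the abstract refers to; the link
# between the KL term and the reliability cost is the abstract's claim,
# while the concrete Gaussian forms here are assumptions.
import numpy as np

def gaussian_vae_elbo(x, enc_mu, enc_logvar, dec_mu, dec_var=1.0):
    """ELBO = E_q[log p(x|z)] - KL(q(z|x) || N(0, I)), with q Gaussian.

    A perfectly reliable code (enc_logvar -> -inf) sends the KL term to
    infinity, so the optimum is a stochastic, sampling-like code.
    """
    recon = -0.5 * np.sum((x - dec_mu) ** 2) / dec_var    # Gaussian log-lik.
    kl = 0.5 * np.sum(np.exp(enc_logvar) + enc_mu ** 2 - 1.0 - enc_logvar)
    return recon - kl

print(gaussian_vae_elbo(x=np.ones(4), enc_mu=np.zeros(2),
                        enc_logvar=np.zeros(2), dec_mu=np.zeros(4)))
```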
The principles of neural encoding and computation are inherently collective and usually involve large populations of interacting neurons with highly correlated activities. While theories of neural function have long recognized the importance of collective effects in populations of neurons, only in the past two decades has it become possible to record from many cells simultaneously using advanced experimental techniques with single-spike resolution, and to relate these correlations to function and behaviour. This review focuses on the modeling and inference approaches that have recently been developed to describe the correlated spiking activity of populations of neurons. We cover a variety of models describing correlations between pairs of neurons as well as between larger groups, synchronous or delayed in time, with or without the explicit influence of the stimulus, and with or without latent variables. We discuss the advantages and drawbacks of each method, as well as the computational challenges related to their application to recordings of ever larger populations.
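
A concrete instance of the model class this review covers is the pairwise maximum-entropy (Ising) model. The sketch below fits one by exact gradient ascent on the log-likelihood, which is feasible only for small populations; the approximate inference methods the review discusses exist precisely because this enumeration is exponential in n. The fitting schedule and demo data are assumptions.

```python
# Minimal pairwise maximum-entropy (Ising) fit by exact moment matching;
# only workable for small n, since all 2^n states are enumerated.
import itertools
import numpy as np

def fit_ising(spikes, n_iter=2000, lr=0.1):
    """spikes: (n_samples, n_cells) array of 0/1 spike words."""
    s = 2.0 * spikes - 1.0                       # map to +/-1 spins
    n = s.shape[1]
    data_m = s.mean(0)                           # empirical means
    data_c = (s.T @ s) / len(s)                  # empirical pairwise moments
    h, J = np.zeros(n), np.zeros((n, n))
    states = np.array(list(itertools.product([-1.0, 1.0], repeat=n)))
    for _ in range(n_iter):
        logp = states @ h + 0.5 * np.einsum('ki,ij,kj->k', states, J, states)
        p = np.exp(logp - logp.max()); p /= p.sum()
        h += lr * (data_m - p @ states)          # match first moments
        J += lr * (data_c - states.T @ (p[:, None] * states))
        np.fill_diagonal(J, 0.0)                 # no self-coupling
    return h, J

rng = np.random.default_rng(0)
h, J = fit_ising((rng.random((2000, 5)) < 0.3).astype(float))
```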
Neural noise sets a limit on information transmission in sensory systems. In several areas, the spiking response (to a repeated stimulus) has shown a higher degree of regularity than predicted by a Poisson process. However, a simple model to explain this low variability is still lacking. Here we introduce a new model, with a correction to Poisson statistics, which can accurately predict the regularity of neural spike trains in response to a repeated stimulus. The model has only two parameters, yet it reproduces the observed variability in retinal recordings under various conditions. We show analytically why this approximation works. In a model of the spike-emitting process that assumes a refractory period, we derive that our simple correction can closely approximate the spike-train statistics over a broad range of firing rates. Our model can easily be plugged into stimulus-processing models, such as the linear-nonlinear model or its generalizations, to replace the commonly assumed Poisson spike-train hypothesis. It estimates the amount of information transmitted much more accurately than Poisson models do in retinal recordings. Thanks to its simplicity, this model has the potential to explain low variability in other areas.
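
The effect this two-parameter correction targets is easy to reproduce in simulation. The sketch below is not the paper's model; it simply thins a Poisson generator with an absolute refractory period (rate and refractory values are assumptions) and shows that the resulting spike counts are sub-Poisson, i.e. have a Fano factor below 1.

```python
# Illustration only: a refractory period makes an otherwise Poisson
# generator sub-Poisson, the regularity the abstract's correction captures.
import numpy as np

rng = np.random.default_rng(1)

def spike_count(rate, T=1.0, refrac=0.005):
    """Spikes in [0, T): exponential ISIs plus an absolute refractory
    period `refrac` (seconds) after every spike."""
    t, n = 0.0, 0
    while True:
        t += rng.exponential(1.0 / rate) + (refrac if n > 0 else 0.0)
        if t >= T:
            return n
        n += 1

counts = np.array([spike_count(rate=50.0) for _ in range(5000)])
print("Fano factor:", counts.var() / counts.mean())   # < 1: sub-Poisson
```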
Models from statistical physics, such as the Ising model, offer a convenient way to characterize the stationary activity of neural populations. Such stationary activity may be expected in recordings from in vitro slices or anesthetized animals. However, modeling the activity of cortical circuits in awake animals has been more challenging, because both spike rates and interactions can change with sensory stimulation, behavior, or the internal state of the brain. Previous approaches to modeling the dynamics of neural interactions suffer from high computational cost; therefore, their application was limited to only about a dozen neurons. Here, by introducing multiple analytic approximation methods into a state-space model of neural population activity, we make it possible to estimate dynamic pairwise interactions of up to 60 neurons. More specifically, we applied the pseudolikelihood approximation to the state-space model, and combined it with the Bethe or TAP mean-field approximation to make sequential Bayesian estimation of the model parameters possible. This large-scale analysis allows us to investigate the dynamics of macroscopic properties of neural circuits underlying stimulus processing and behavior. We show that the model accurately estimates the dynamics of network properties such as sparseness, entropy, and heat capacity on simulated data, and we demonstrate the utility of these measures by analyzing the activity of monkey V4 neurons as well as a simulated balanced network of spiking neurons.
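
Of the ingredients named above, the TAP mean-field approximation is the one that replaces the exponential sum over 2^n spike words with a self-consistency iteration. A minimal sketch, with random placeholder couplings rather than values estimated from data:

```python
# TAP (Thouless-Anderson-Palmer) self-consistency for Ising magnetizations:
# naive mean field plus the Onsager reaction term, iterated with damping.
import numpy as np

rng = np.random.default_rng(2)
n = 60
J = rng.normal(0.0, 0.1 / np.sqrt(n), (n, n))
J = (J + J.T) / 2.0                               # symmetric couplings
np.fill_diagonal(J, 0.0)
h = rng.normal(0.0, 0.1, n)

m = np.zeros(n)                                   # magnetizations <s_i>
for _ in range(500):
    field = h + J @ m - m * ((J ** 2) @ (1.0 - m ** 2))
    m = 0.5 * m + 0.5 * np.tanh(field)            # damped update
print("approx. firing probabilities:", ((1.0 + m) / 2.0)[:5])
```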
We consider a sparse random network of excitatory leaky integrate-and-fire neurons with short-term synaptic depression. Furthermore, to mimic the dynamics of a brain circuit in its first stages of development, we introduce for each neuron correlations between in-degree and out-degree, as well as between excitability and the corresponding total degree. We analyze the influence of single-neuron stimulation and deletion on the collective dynamics of the network. We show the existence of a small group of neurons capable of controlling and even silencing the bursting activity of the network. These neurons form a functional clique, since only their activation in a precise order and within specific time windows can ignite population bursts.
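
The model class is compact enough to sketch. The toy simulation below implements sparse excitatory LIF neurons with Tsodyks-Markram-style short-term depression; all parameters are generic textbook values, and the paper's degree and excitability correlations are omitted for brevity.

```python
# Toy sparse excitatory LIF network with short-term synaptic depression
# (generic parameters; the paper's developmental degree correlations are
# not included in this sketch).
import numpy as np

rng = np.random.default_rng(3)
n, p_conn, dt = 100, 0.1, 1e-3
W = (rng.random((n, n)) < p_conn) * 1.5          # sparse excitatory weights
np.fill_diagonal(W, 0.0)

v = rng.uniform(0.0, 1.0, n)    # potentials; threshold = 1, reset = 0
x = np.ones(n)                  # available synaptic resources per neuron
tau_m, tau_rec, u, I_ext = 0.02, 0.8, 0.5, 1.05

for step in range(2000):
    spikes = v >= 1.0
    v[spikes] = 0.0
    drive = W @ (u * x * spikes)                 # depression-scaled input
    x += dt * (1.0 - x) / tau_rec                # resource recovery
    x[spikes] *= 1.0 - u                         # resources spent on a spike
    v += dt * (I_ext - v) / tau_m + 0.05 * drive # leaky integration
    if spikes.sum() > n // 2:
        print("population burst at t =", step * dt, "s")
```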