
Sampling-based probabilistic inference emerges from learning in neural circuits with a cost on reliability

Added by Laurence Aitchison
Publication date: 2018
Field: Biology
Language: English





Neural responses in the cortex change over time both systematically, due to ongoing plasticity and learning, and seemingly randomly, due to various sources of noise and variability. Most previous work considered each of these processes, learning and variability, in isolation -- here we study neural networks exhibiting both and show that their interaction leads to the emergence of powerful computational properties. We trained neural networks on classical unsupervised learning tasks, in which the objective was to represent their inputs in an efficient, easily decodable form, with an additional cost for neural reliability which we derived from basic biophysical considerations. This cost on reliability introduced a tradeoff between energetically cheap but inaccurate representations and energetically costly but accurate ones. Despite the learning tasks being non-probabilistic, the networks solved this tradeoff by developing a probabilistic representation: neural variability represented samples from statistically appropriate posterior distributions that would result from performing probabilistic inference over their inputs. We provide an analytical understanding of this result by revealing a connection between the cost of reliability and the objective for a state-of-the-art Bayesian inference strategy: variational autoencoders. We show that the same cost leads to the emergence of increasingly accurate probabilistic representations as networks become more complex, from single-layer feed-forward, through multi-layer feed-forward, to recurrent architectures. Our results provide insights into why neural responses in sensory areas show signatures of sampling-based probabilistic representations, and may inform future deep learning algorithms and their implementation in stochastic low-precision computing systems.
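The link the abstract draws between a reliability cost and the variational autoencoder objective can be sketched numerically. In the evidence lower bound (ELBO), the reconstruction term rewards accurate codes while the KL term charges for reliable (low-variance) posteriors. The toy decoder, parameter values and variable names below are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def elbo_terms(x, mu_z, sigma_z, decode, n_samples=1000):
    """Monte-Carlo estimate of the two ELBO terms for a Gaussian
    approximate posterior q(z|x) = N(mu_z, sigma_z^2) and a standard
    normal prior p(z) = N(0, 1)."""
    z = mu_z + sigma_z * rng.standard_normal(n_samples)
    # Reconstruction accuracy: expected (Gaussian) log-likelihood of x, up to a constant.
    recon = np.mean(-0.5 * (x - decode(z)) ** 2)
    # KL(q || p) for two univariate Gaussians -- the "cost of reliability":
    # shrinking sigma_z (a more reliable code) makes this term larger.
    kl = np.log(1.0 / sigma_z) + 0.5 * (sigma_z**2 + mu_z**2) - 0.5
    return recon, kl

decode = lambda z: 2.0 * z  # toy linear decoder, assumed for illustration
x = 1.0
recon_noisy, kl_noisy = elbo_terms(x, mu_z=0.5, sigma_z=1.0, decode=decode)
recon_sharp, kl_sharp = elbo_terms(x, mu_z=0.5, sigma_z=0.1, decode=decode)
```

With these numbers the low-variance posterior reconstructs better (`recon_sharp > recon_noisy`) but pays a higher KL cost (`kl_sharp > kl_noisy`), which is the accuracy/reliability tradeoff the abstract describes; the optimum keeps a nonzero posterior variance, i.e. a sampling-based representation.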



Related research

This paper addresses two main challenges facing systems neuroscience today: understanding the nature and function of a) cortical feedback between sensory areas and b) correlated variability. Starting from the old idea of perception as probabilistic inference, we show how to use knowledge of the psychophysical task to make easily testable predictions for the impact that feedback signals have on early sensory representations. Applying our framework to the well-studied two-alternative forced choice task paradigm, we can explain multiple empirical findings that have been hard to account for by the traditional feedforward model of sensory processing, including the task-dependence of neural response correlations, and the diverging time courses of choice probabilities and psychophysical kernels. Our model makes a number of new predictions and, importantly, characterizes a component of correlated variability that represents task-related information rather than performance-degrading noise. It also demonstrates a normative way to integrate sensory and cognitive components into physiologically testable mathematical models of perceptual decision-making.
The emerging field of optogenetics allows for optical activation or inhibition of neurons and other tissue in the nervous system. In 2005, optogenetic proteins were expressed in the nematode C. elegans for the first time. Since then, C. elegans has served as a powerful platform upon which to conduct optogenetic investigations of synaptic function, circuit dynamics and the neuronal basis of behavior. The C. elegans nervous system, consisting of 302 neurons whose connectivity and morphology have been mapped completely, drives a rich repertoire of behaviors that are quantifiable by video microscopy. This model organism's compact nervous system, quantifiable behavior, genetic tractability and optical accessibility make it especially amenable to optogenetic interrogation. Channelrhodopsin-2 (ChR2), halorhodopsin (NpHR/Halo) and other common optogenetic proteins have all been expressed in C. elegans. Moreover, recent advances leveraging molecular genetics and patterned light illumination have now made it possible to target photoactivation and inhibition to single cells, and to do so in worms as they behave freely. Here we describe techniques and methods for optogenetic manipulation in C. elegans. We review recent work using optogenetics and C. elegans for neuroscience investigations at the level of synapses, circuits and behavior.
The Bayesian view of the brain hypothesizes that the brain constructs a generative model of the world and uses it to make inferences via Bayes' rule. Although many types of approximate inference schemes have been proposed for hierarchical Bayesian models of the brain, the question of how these distinct inference procedures can be realized by hierarchical networks of spiking neurons remains largely unresolved. Based on a previously proposed multi-compartment neuron model in which dendrites perform logarithmic compression, and stochastic spiking winner-take-all (WTA) circuits in which the firing probability of each neuron is normalized by the activities of other neurons, here we construct spiking neural networks that perform structured mean-field variational inference and learning on hierarchical directed probabilistic graphical models with discrete random variables. In these models, we do away with the symmetric synaptic weights previously assumed for unstructured mean-field variational inference by learning the feedback and feedforward weights separately. The resulting online learning rules take the form of an error-modulated local spike-timing-dependent plasticity rule. Importantly, we consider two types of WTA circuits, in which either only one neuron is allowed to fire at a time (hard WTA) or neurons can fire independently (soft WTA), which makes neurons in these circuits operate in regimes of temporal and rate coding respectively. We show how the hard WTA circuits can be used to perform Gibbs sampling, whereas the soft WTA circuits can be used to implement a message-passing algorithm that computes the marginals approximately. Notably, a simple change in the amount of lateral inhibition switches between the hard and soft WTA spiking regimes. Hence the proposed network provides a unified view of two previously disparate modes of inference and coding by spiking neurons.
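The hard/soft WTA distinction above can be illustrated with a toy softmax circuit in which an inverse-temperature parameter stands in for the strength of lateral inhibition. The function name, the activation values and the use of softmax are illustrative assumptions, not the paper's neuron model:

```python
import numpy as np

rng = np.random.default_rng(1)

def wta(activations, beta):
    """Firing probabilities in a toy WTA circuit: a softmax over membrane
    activations, with beta playing the role of lateral-inhibition strength."""
    e = np.exp(beta * (activations - activations.max()))  # shift for stability
    return e / e.sum()

acts = np.array([1.0, 2.0, 3.0])
soft = wta(acts, beta=1.0)   # weak inhibition: graded rates, approximate marginals
hard = wta(acts, beta=50.0)  # strong inhibition: essentially one winner

# In the hard regime, a circuit update behaves like drawing a single
# sample from the distribution (a Gibbs-style update):
winner = rng.choice(len(acts), p=hard)
```

The same circuit thus interpolates between rate coding (`soft`, where each neuron's firing rate tracks a marginal probability) and temporal coding (`hard`, where each time step emits one sampled winner), matching the abstract's claim that lateral inhibition alone switches the inference mode.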
Objective. Modelling is an important way to study the working mechanisms of the brain, yet current characterizations and understanding of the brain remain inadequate. This study builds a system-level model of the brain from the perspective of thermodynamics, bringing a new approach to brain modelling. Approach. Regarding brain regions as systems, voxels as particles, and signal intensity as particle energy, a thermodynamic model of the brain was built based on canonical ensemble theory. Two pairs of activated brain regions and two pairs of inactivated brain regions were selected for comparison, and the thermodynamic properties derived from the proposed model were analyzed. In addition, the thermodynamic properties were extracted as input features for the detection of Alzheimer's disease. Main results. The experimental results support the assumption that the brain follows thermodynamic laws, and demonstrate the feasibility and rationality of the proposed thermodynamic modelling method, indicating that thermodynamic parameters can describe the state of a neural system. Meanwhile, the thermodynamic model achieved much better accuracy in the detection of Alzheimer's disease, suggesting its potential application in auxiliary diagnosis. Significance. (1) Instead of applying isolated thermodynamic parameters to analyze the neural system, a system-level brain model grounded in thermodynamics is proposed for the first time. (2) The study finds that the neural system follows the laws of thermodynamics, with increased internal energy, increased free energy and decreased entropy when the system is activated. (3) The detection of neural disease is shown to benefit from the thermodynamic model, implying the considerable potential of thermodynamics in auxiliary diagnosis.
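The voxels-as-particles mapping can be made concrete with textbook canonical-ensemble formulas. The function below is a sketch of the modelling idea only; the paper's exact estimators, temperature choice and region definitions are not specified in the abstract:

```python
import numpy as np

def brain_thermodynamics(intensities, T=1.0):
    """Canonical-ensemble quantities for a brain region, treating voxels as
    particles and signal intensities as particle energies (a sketch of the
    modelling idea, with an assumed unit temperature)."""
    E = np.asarray(intensities, dtype=float)
    beta = 1.0 / T
    w = np.exp(-beta * (E - E.min()))  # Boltzmann weights (shifted for numerical stability)
    p = w / w.sum()                    # occupation probability of each voxel
    U = np.sum(p * E)                  # internal energy <E>
    S = -np.sum(p * np.log(p))         # entropy
    F = U - T * S                      # Helmholtz free energy
    return U, F, S

# Uniform intensities maximize entropy (S = log N); any structure in the
# signal lowers it.
U_flat, F_flat, S_flat = brain_thermodynamics([2.0, 2.0, 2.0, 2.0])
U_pk, F_pk, S_pk = brain_thermodynamics([0.0, 0.0, 0.0, 5.0])
```

This already reproduces the qualitative direction of one of the abstract's findings: a region whose voxel intensities become structured (here the peaked profile) has lower entropy than a uniform one.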
Vision research has been shaped by the seminal insight that we can understand the higher-tier visual cortex from the perspective of multiple functional pathways with different goals. In this paper, we try to give a computational account of the functional organization of this system by reasoning from the perspective of multi-task deep neural networks. Machine learning has shown that tasks become easier to solve when they are decomposed into subtasks with their own cost function. We hypothesize that the visual system optimizes multiple cost functions of unrelated tasks and that this causes the emergence of a ventral pathway dedicated to vision for perception, and a dorsal pathway dedicated to vision for action. To evaluate the functional organization in multi-task deep neural networks, we propose a method that measures the contribution of a unit towards each task, applying it to two networks that have been trained on either two related or two unrelated tasks, using an identical stimulus set. Results show that the network trained on the unrelated tasks shows a decreasing degree of feature-representation sharing towards higher-tier layers, while the network trained on related tasks uniformly shows a high degree of sharing. We conjecture that the method we propose can be used to analyze the anatomical and functional organization of the visual system and beyond. We predict that the degree to which tasks are related is a good descriptor of the degree to which they share downstream cortical units.
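The abstract describes the unit-contribution measure only abstractly, so the sketch below substitutes a deliberately simple proxy (absolute readout weight) to show how per-unit contributions to two tasks could be turned into a layer-wise sharing score. All names, the proxy and the numbers are hypothetical, not the paper's method:

```python
import numpy as np

def task_sharing(contrib_a, contrib_b):
    """Per-unit feature-sharing score from a unit's contribution to each of
    two tasks: 1 means the unit contributes equally to both tasks, 0 means
    it is fully task-specific."""
    a = np.abs(contrib_a)
    b = np.abs(contrib_b)
    # Ratio of the smaller to the larger contribution (clip avoids 0/0).
    return np.minimum(a, b) / np.maximum(a, b).clip(min=1e-12)

# Hypothetical contributions of four hidden units to two tasks,
# here proxied by readout-weight magnitudes:
w_task_a = np.array([0.9, 0.8, 0.0, 0.1])
w_task_b = np.array([0.9, 0.0, 0.7, 0.1])
scores = task_sharing(w_task_a, w_task_b)
layer_sharing = scores.mean()  # one number per layer, compared across depth
```

Computing `layer_sharing` at every depth of a trained network would yield the kind of profile the abstract reports: roughly flat for related tasks, decreasing towards higher-tier layers for unrelated ones.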
