The Bayesian view of the brain hypothesizes that the brain constructs a generative model of the world and uses it to make inferences via Bayes' rule. Although many approximate inference schemes have been proposed for hierarchical Bayesian models of the brain, the question of how these distinct inference procedures can be realized by hierarchical networks of spiking neurons remains largely unresolved. Building on a previously proposed multi-compartment neuron model in which dendrites perform logarithmic compression, and on stochastic spiking winner-take-all (WTA) circuits in which the firing probability of each neuron is normalized by the activities of the other neurons, we construct spiking neural networks that perform structured mean-field variational inference and learning on hierarchical directed probabilistic graphical models with discrete random variables. In these models, we dispense with the symmetric synaptic weights previously assumed for unstructured mean-field variational inference by learning the feedforward and feedback weights separately. The resulting online learning rules take the form of an error-modulated local Spike-Timing-Dependent Plasticity (STDP) rule. Importantly, we consider two types of WTA circuits: one in which only a single neuron is allowed to fire at a time (hard WTA) and one in which neurons can fire independently (soft WTA), so that neurons in these circuits operate in the temporal-coding and rate-coding regimes, respectively. We show how the hard WTA circuits can be used to perform Gibbs sampling, whereas the soft WTA circuits can implement a message-passing algorithm that computes the marginals approximately. Notably, a simple change in the amount of lateral inhibition switches between the hard and soft WTA spiking regimes. Hence the proposed network provides a unified view of two previously disparate modes of inference and coding by spiking neurons.
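To make the contrast between the two regimes concrete, the following minimal NumPy sketch (not from the paper; the function name wta_step, the inhibition parameter, and its 1.0 threshold are illustrative assumptions) simulates a single stochastic update of a K-neuron WTA pool. With strong lateral inhibition the pool emits a one-hot categorical sample, the kind a Gibbs sampler over discrete variables needs, while with weak inhibition the neurons fire independently and their average rates approximate the normalized probabilities.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def wta_step(u, inhibition):
    """One stochastic update of a K-neuron winner-take-all circuit.

    u          : membrane potentials of the K neurons, assumed to carry
                 log-domain inputs (e.g. from logarithmically compressing
                 dendrites), so that exponentiation plus normalization
                 below amounts to a softmax over the K states.
    inhibition : lateral-inhibition strength (hypothetical parameter);
                 values >= 1.0 are treated here as the hard-WTA regime,
                 smaller values as the soft-WTA regime.
    """
    # Normalized firing probabilities: each neuron's activity is divided
    # by the summed activity of the whole pool.
    p = np.exp(u - u.max())
    p /= p.sum()

    k = len(u)
    if inhibition >= 1.0:
        # Hard WTA: strong lateral inhibition silences all but one
        # neuron, so the circuit emits a one-hot categorical sample.
        spikes = np.zeros(k, dtype=int)
        spikes[rng.choice(k, p=p)] = 1
    else:
        # Soft WTA: weak inhibition lets each neuron fire independently,
        # so firing rates approximate the normalized marginals.
        spikes = (rng.random(k) < p).astype(int)
    return spikes

# Same membrane potentials, two inhibition regimes.
u = np.log(np.array([0.1, 0.6, 0.3]))
print(wta_step(u, inhibition=1.0))   # one-hot spike vector (temporal code)
print(wta_step(u, inhibition=0.1))   # independent spikes (rate code)
```

In the actual model the dynamics unfold in continuous time with spiking neurons and dendritic compression; the sketch only illustrates how normalized firing probabilities plus a change in lateral inhibition switch the same circuit between the two sampling regimes.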
In this work we study biological neural networks from an algorithmic perspective, focusing on understanding tradeoffs between computation time and network complexity. Our goal is to abstract real neural networks in a way that, while not capturing all …
Winner-Take-All (WTA) refers to the neural operation that selects a (typically small) group of neurons from a large neuron pool. It is conjectured to underlie many of the brain's fundamental computational abilities. However, not much is known about …
Networks of spiking neurons and Winner-Take-All spiking circuits (WTA-SNNs) can detect information encoded in spatio-temporal multi-valued events. These are described by the timing of events of interest, e.g., clicks, as well as by categorical numerical …
In this paper we present a novel approach to automatically infer parameters of spiking neural networks. Neurons are modelled as timed automata waiting for inputs on a number of different channels (synapses), for a given amount of time (the accumulation …)
Approximate inference in deep Bayesian networks exhibits a dilemma of how to yield high fidelity posterior approximations while maintaining computational efficiency and scalability. We tackle this challenge by introducing a novel variational structure …