Networks of spiking neurons and Winner-Take-All spiking circuits (WTA-SNNs) can detect information encoded in spatio-temporal multi-valued events. Such data are described by the timing of events of interest, e.g., clicks, as well as by categorical numerical values assigned to each event, e.g., like or dislike. Other use cases include object recognition from data collected by neuromorphic cameras, which produce, for each pixel, signed bits at the times of sufficiently large brightness variations. Existing schemes for training WTA-SNNs are limited to rate-encoding solutions, and are hence able to detect only spatial patterns. Developing more general training algorithms for arbitrary WTA-SNNs inherits the challenges of training (binary) Spiking Neural Networks (SNNs): most notably, the non-differentiability of threshold functions, the recurrent behavior of spiking neural models, and the difficulty of implementing backpropagation in neuromorphic hardware. In this paper, we develop a variational online local training rule for WTA-SNNs, referred to as VOWEL, that leverages only local pre- and post-synaptic information for visible circuits, and an additional common reward signal for hidden circuits. The method is based on probabilistic generalized linear neural models, control variates, and variational regularization. Experimental results on real-world neuromorphic datasets with multi-valued events demonstrate the advantages of WTA-SNNs over conventional binary SNNs trained with state-of-the-art methods, especially in the presence of limited computing resources.
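To make the probabilistic model concrete, below is a minimal sketch of the kind of generalized linear (GLM) WTA spiking circuit this abstract builds on: at each time step the circuit emits at most one of C categorical spike values, sampled from a softmax over membrane potentials driven by filtered pre-synaptic traces. The function name `wta_step`, the weight shapes, and the "value 0 = no spike" convention are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def wta_step(pre_traces, W, b):
    """One step of a probabilistic WTA spiking circuit (GLM model, sketch).

    pre_traces: (N_pre,) filtered pre-synaptic activity
    W: (C, N_pre) per-value synaptic weights; b: (C,) biases
    Returns an integer in {0, ..., C}: 0 = no spike, k = spike of value k.
    """
    u = W @ pre_traces + b               # one membrane potential per spike value
    logits = np.concatenate(([0.0], u))  # index 0 reserved for "no spike"
    p = np.exp(logits - logits.max())
    p /= p.sum()                         # softmax over {no spike, values 1..C}
    return rng.choice(len(p), p=p)

# Example: 4 pre-synaptic inputs, 2 event values (e.g., like / dislike)
W = rng.normal(scale=0.5, size=(2, 4))
b = np.zeros(2)
trace = rng.random(4)
print(wta_step(trace, W, b))
```

Because the output is a sample from an explicit distribution, the log-likelihood of observed spikes is differentiable in W and b, which is what enables the variational, reward-modulated local updates the abstract describes.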
In this work we study biological neural networks from an algorithmic perspective, focusing on understanding tradeoffs between computation time and network complexity. Our goal is to abstract real neural networks in a way that, while not capturing all interesting features, preserves high-level behavior and allows us to make biologically relevant conclusions. Towards this goal, we consider the implementation of algorithmic primitives in a simple yet biologically plausible model of stochastic spiking neural networks. In particular, we show how the stochastic behavior of neurons in this model can be leveraged to solve a basic symmetry-breaking task in which we are given neurons with identical firing rates and want to select a distinguished one. In computational neuroscience, this is known as the winner-take-all (WTA) problem, and it is believed to serve as a basic building block in many tasks, e.g., learning, pattern recognition, and clustering. We provide efficient constructions of WTA circuits in our stochastic spiking neural network model, as well as lower bounds in terms of the number of auxiliary neurons required to drive convergence to WTA in a given number of steps. These lower bounds demonstrate that our constructions are near-optimal in some cases. This work covers, and gives more in-depth proofs of, a subset of results originally published in [LMP17a]. It is adapted from the last chapter of C. Musco's Ph.D. thesis [Mus18].
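The following sketch illustrates the symmetry-breaking idea in the simplest possible form: identical neurons inhibit each other through a shared inhibitory signal, and random fluctuations in stochastic firing eventually leave a single active neuron. The specific dynamics (sigmoidal firing probability, one global inhibition term, the chosen weights) are a simplification for illustration, not the paper's constructions or its auxiliary-neuron bounds.

```python
import numpy as np

rng = np.random.default_rng(1)

def wta_converge(n=8, w_exc=3.0, w_inh=4.0, max_steps=1000):
    """Toy stochastic WTA among n neurons with identical firing rates.

    Inhibition from other firers is strong, so steps with several firers
    tend to silence everyone; from silence each neuron fires w.p. 1/2,
    and the race repeats until one neuron fires alone. A lone winner then
    keeps firing w.p. sigmoid(3) ~ 0.95 while losers fire w.p. ~ 0.02.
    """
    fired = np.ones(n, dtype=bool)                   # identical start: all fire
    for t in range(max_steps):
        inhibition = w_inh * (fired.sum() - fired)   # from the *other* firers
        potential = w_exc * fired - inhibition
        p = 1.0 / (1.0 + np.exp(-potential))         # stochastic spiking
        fired = rng.random(n) < p
        if fired.sum() == 1:                         # symmetry broken
            return t + 1, int(np.flatnonzero(fired)[0])
    return max_steps, -1

steps, winner = wta_converge()
print(f"converged in {steps} steps, winner = neuron {winner}")
```

The point of the sketch is that no deterministic rule could pick a winner here, since all neurons are identical; only the randomness of firing breaks the tie, which is exactly the property the paper's constructions and lower bounds quantify.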
The Bayesian view of the brain hypothesizes that the brain constructs a generative model of the world and uses it to make inferences via Bayes' rule. Although many types of approximate inference schemes have been proposed for hierarchical Bayesian models of the brain, the question of how these distinct inference procedures can be realized by hierarchical networks of spiking neurons remains largely unresolved. Based on a previously proposed multi-compartment neuron model in which dendrites perform logarithmic compression, and on stochastic spiking winner-take-all (WTA) circuits in which the firing probability of each neuron is normalized by the activities of the other neurons, here we construct Spiking Neural Networks that perform structured mean-field variational inference and learning on hierarchical directed probabilistic graphical models with discrete random variables. In these models, we do away with the symmetric synaptic weights previously assumed for unstructured mean-field variational inference by learning the feedback and feedforward weights separately. The resulting online learning rules take the form of an error-modulated local Spike-Timing-Dependent Plasticity rule. Importantly, we consider two types of WTA circuits: one in which only one neuron is allowed to fire at a time (hard WTA), and one in which neurons can fire independently (soft WTA), which makes neurons in these circuits operate in the temporal- and rate-coding regimes, respectively. We show how the hard WTA circuits can be used to perform Gibbs sampling, whereas the soft WTA circuits can be used to implement a message-passing algorithm that computes the marginals approximately. Notably, a simple change in the amount of lateral inhibition switches the circuit between the hard and soft WTA spiking regimes. Hence the proposed network provides a unified view of two previously disparate modes of inference and coding by spiking neurons.
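The hard/soft distinction the abstract draws can be captured in a few lines. In the sketch below, strong lateral inhibition corresponds to normalizing firing probabilities across the group so that exactly one neuron spikes (a categorical draw, usable as a Gibbs-sampling step), while removing the inhibition lets each neuron fire independently with sigmoidal probability (soft WTA, rate coding). The function name and the binary `hard` switch are illustrative; in the paper the switch is a continuous change in the amount of lateral inhibition.

```python
import numpy as np

rng = np.random.default_rng(2)

def wta_sample(u, hard=True):
    """Sample one time step of spikes from a WTA circuit with potentials u.

    hard=True : strong lateral inhibition -> softmax normalization over the
                group, exactly one neuron spikes (categorical / Gibbs draw).
    hard=False: no lateral inhibition -> each neuron spikes independently
                with sigmoidal probability (soft WTA, rate coding).
    """
    if hard:
        p = np.exp(u - u.max())
        p /= p.sum()                       # firing prob. normalized by the group
        spikes = np.zeros_like(u)
        spikes[rng.choice(len(u), p=p)] = 1.0
    else:
        p = 1.0 / (1.0 + np.exp(-u))       # independent firing probabilities
        spikes = (rng.random(len(u)) < p).astype(float)
    return spikes

u = np.array([0.2, 1.5, -0.3, 0.8])
print("hard WTA:", wta_sample(u, hard=True))   # one-hot spike vector
print("soft WTA:", wta_sample(u, hard=False))  # possibly several spikes
```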
Spiking Neural Networks (SNNs) offer a promising alternative to conventional Artificial Neural Networks (ANNs) for the implementation of on-device low-power online learning and inference. On-device training is, however, constrained by the limited amount of data available at each device. In this paper, we propose to mitigate this problem via cooperative training through Federated Learning (FL). To this end, we introduce an online FL-based learning rule for networked on-device SNNs, which we refer to as FL-SNN. FL-SNN leverages local feedback signals within each SNN, in lieu of backpropagation, and global feedback through communication via a base station. The scheme demonstrates significant advantages over separate training and features a flexible trade-off between communication load and accuracy via the selective exchange of synaptic weights.
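A minimal sketch of the cooperative loop described above: devices train their SNNs locally, and a base station periodically averages a selected subset of synaptic weights across devices, trading communication load against accuracy. The selection rule shown (largest average magnitude) and the function `federated_round` are placeholder assumptions for illustration; they are not the FL-SNN learning rule itself.

```python
import numpy as np

rng = np.random.default_rng(3)

def federated_round(local_weights, frac=0.25):
    """Base-station step: average a selected fraction of synaptic weights.

    Exchanging only `frac` of the entries (here, a placeholder rule: the
    synapses with largest average magnitude) reduces the communication
    load at some cost in accuracy, as in the abstract's selective exchange.
    """
    stacked = np.stack(local_weights)            # (num_devices, num_synapses)
    k = max(1, int(frac * stacked.shape[1]))
    idx = np.argsort(-np.abs(stacked).mean(axis=0))[:k]
    avg = stacked[:, idx].mean(axis=0)           # global average of selection
    for w in local_weights:                      # broadcast back to devices
        w[idx] = avg
    return local_weights

# Example: 4 devices whose 10 synaptic weights drifted apart during
# local training; one round re-synchronizes 30% of them.
weights = [rng.normal(size=10) for _ in range(4)]
weights = federated_round(weights, frac=0.3)
```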
Artificial Neural Network (ANN)-based inference on battery-powered devices can be made more energy-efficient by restricting the synaptic weights to be binary, hence eliminating the need to perform multiplications. An alternative, emerging approach relies on the use of Spiking Neural Networks (SNNs): biologically inspired, dynamic, event-driven models that enhance energy efficiency via the use of binary, sparse activations. In this paper, an SNN model is introduced that combines the benefits of temporally sparse binary activations and of binary weights. Two learning rules are derived, the first based on the combination of straight-through and surrogate gradient techniques, and the second based on a Bayesian paradigm. Experiments quantify the performance loss with respect to full-precision implementations, and demonstrate the advantage of the Bayesian paradigm in terms of accuracy and calibration.
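The first learning rule combines two standard tricks, sketched below with manual NumPy gradients: the weights are binarized in the forward pass but gradients flow "straight through" to latent full-precision weights, and the non-differentiable spike threshold is replaced in the backward pass by a surrogate derivative (here, a scaled sigmoid derivative). The toy one-layer setup and the error signal are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

def spike(u):
    """Forward pass: Heaviside step; non-differentiable at the threshold."""
    return (u > 0).astype(float)

def surrogate_grad(u, alpha=2.0):
    """Backward pass: derivative of a sigmoid relaxation of the step."""
    s = 1.0 / (1.0 + np.exp(-alpha * u))
    return alpha * s * (1.0 - s)

def binarize(w):
    """Forward: sign(w) in {-1, +1}. Backward (straight-through): the
    gradient is applied unchanged to the latent full-precision w."""
    return np.where(w >= 0, 1.0, -1.0)

# One neuron, one gradient step on the latent weights
rng = np.random.default_rng(4)
w = rng.normal(scale=0.1, size=3)      # latent full-precision weights
x = rng.random(3)                      # pre-synaptic trace
u = binarize(w) @ x                    # potential computed with binary weights
out = spike(u)
err = out - 1.0                        # toy error signal (target: a spike)
grad_w = err * surrogate_grad(u) * x   # surrogate through the threshold,
w -= 0.1 * grad_w                      # straight-through onto latent w
```

Only the binarized weights and binary spikes would need to be stored or communicated at inference time; the full-precision w exists only during training.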
Inspired by advances in the biological sciences, the study of sparse binary projection models has attracted considerable recent research attention. These models project dense input samples into a higher-dimensional space and output sparse binary data representations after a Winner-Take-All competition, subject to the constraint that the projection matrix is also sparse and binary. Following this line of work, we developed a supervised WTA model for the setting in which training samples with both input and output representations are available, from which the optimal projection matrix can be obtained with a simple, effective, and efficient algorithm. We further extended the model and the algorithm to an unsupervised setting in which only the input representation of the samples is available. In a series of empirical evaluations on similarity-search tasks, the proposed models reported significantly improved results over state-of-the-art methods in both search accuracy and running speed. These results give us strong confidence that the work provides a highly practical tool for real-world applications.
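For context, the forward pass shared by this family of models looks as follows: a sparse binary matrix projects a dense input up to a higher dimension, and the WTA competition keeps only the top-k activations as a sparse binary code. The sketch uses a random projection matrix for illustration; the abstract's contribution is precisely an algorithm for learning this matrix from data, which is not shown here, and all dimensions and the sparsity level are assumed values.

```python
import numpy as np

rng = np.random.default_rng(5)

def sparse_binary_projection(d_in=64, d_out=512, density=0.1):
    """Random sparse binary projection matrix (each entry 1 w.p. `density`).
    Stands in for the learned matrix of the supervised/unsupervised models."""
    return (rng.random((d_out, d_in)) < density).astype(float)

def wta_encode(x, P, k=16):
    """Project x to d_out dims, then Winner-Take-All: set the k largest
    activations to 1 and the rest to 0 -> sparse binary representation."""
    y = P @ x
    code = np.zeros_like(y)
    code[np.argpartition(-y, k)[:k]] = 1.0
    return code

P = sparse_binary_projection()
x = rng.random(64)                # dense input sample
h = wta_encode(x, P)              # 512-dim code with exactly 16 ones
print(int(h.sum()), "active bits of", h.size)
```

Similarity search then reduces to comparing these short binary codes (e.g., by overlap of active bits), which is why both accuracy and running speed depend so heavily on how the projection matrix is chosen.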