
Two correlation games for a nonlinear network with Hebbian excitatory neurons and anti-Hebbian inhibitory neurons

Published by H. Sebastian Seung
Publication date: 2018
Paper language: English





A companion paper introduces a nonlinear network with Hebbian excitatory (E) neurons that are reciprocally coupled with anti-Hebbian inhibitory (I) neurons and also receive Hebbian feedforward excitation from sensory (S) afferents. The present paper derives the network from two normative principles that are mathematically equivalent but conceptually different. The first principle formulates unsupervised learning as a constrained optimization problem: maximization of S-E correlations subject to a copositivity constraint on E-E correlations. A combination of Legendre and Lagrangian duality yields a zero-sum continuous game between excitatory and inhibitory connections that is solved by the neural network. The second principle defines a zero-sum game between E and I cells. E cells want to maximize S-E correlations and minimize E-I correlations, while I cells want to maximize I-E correlations and minimize power. The conflict between I and E objectives effectively forces the E cells to decorrelate from each other, although only incompletely. Legendre duality yields the neural network.
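To make the first principle concrete, the following is a rough schematic of the constrained problem and its Lagrangian; the diagonal normalization D and the precise form of the objective are illustrative assumptions, not the paper's exact statement. Recall that a symmetric matrix M is copositive if u^T M u >= 0 for all u >= 0, a weaker condition than positive semidefiniteness that is natural when activities are constrained to be nonnegative.

\max_{x \ge 0} \; \sum_{i,a} \langle x_i s_a \rangle
\quad \text{subject to} \quad
D - \langle x x^{\top} \rangle \ \text{copositive},

L(x, B) \;=\; \sum_{i,a} \langle x_i s_a \rangle
\;+\; \tfrac{1}{2} \sum_{i,j} B_{ij}\,\big( D_{ij} - \langle x_i x_j \rangle \big),
\qquad B \ \text{completely positive}.

Here the multiplier B ranges over completely positive matrices (sums of u u^T with u >= 0, the dual cone of the copositive cone), so minimizing L over B while maximizing over the activities is a zero-sum problem in which the entries of B play the role of inhibitory connection strengths. In the paper the duality is carried further, to a game between the excitatory and inhibitory connection matrices themselves; this sketch stops at the activity-level Lagrangian.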




Read also

Collective oscillations and their suppression by external stimulation are analyzed in a large-scale neural network consisting of two interacting populations of excitatory and inhibitory quadratic integrate-and-fire neurons. In the limit of an infinite number of neurons, the microscopic model of this network can be reduced to an exact low-dimensional system of mean-field equations. Bifurcation analysis of these equations reveals three different dynamic modes in a free network: a stable resting state, a stable limit cycle, and bistability with a coexisting resting state and a limit cycle. We show that in the limit cycle mode, high-frequency stimulation of an inhibitory population can stabilize an unstable resting state and effectively suppress collective oscillations. We also show that in the bistable mode, the dynamics of the network can be switched from a stable limit cycle to a stable resting state by applying an inhibitory pulse to the excitatory population. The results obtained from the mean-field equations are confirmed by numerical simulation of the microscopic model.
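For context on the reduction used here, the sketch below integrates the standard exact mean-field equations for two coupled quadratic integrate-and-fire populations under the Lorentzian ansatz, with a firing rate r and mean membrane potential v per population. The coupling constants and excitability parameters are illustrative assumptions, not the paper's values.

import numpy as np

dt, T = 1e-4, 5.0
steps = int(T / dt)

re, ve = 0.1, -1.0                   # E population: rate, mean potential
ri, vi = 0.1, -1.0                   # I population: rate, mean potential
eta_e, eta_i = 2.0, 1.0              # median excitabilities (assumed)
de, di = 0.5, 0.5                    # Lorentzian half-widths (assumed)
Jee, Jei, Jie, Jii = 8.0, 10.0, 10.0, 2.0  # synaptic couplings (assumed)

rates = np.empty((steps, 2))
for k in range(steps):
    dre = de / np.pi + 2.0 * re * ve
    dve = ve**2 + eta_e - (np.pi * re)**2 + Jee * re - Jei * ri
    dri = di / np.pi + 2.0 * ri * vi
    dvi = vi**2 + eta_i - (np.pi * ri)**2 + Jie * re - Jii * ri
    re, ve = re + dt * dre, ve + dt * dve
    ri, vi = ri + dt * dri, vi + dt * dvi
    rates[k] = (re, ri)
# Inspect rates for the regimes above: a fixed point (resting state),
# sustained oscillations (limit cycle), or coexistence of both under
# different initial conditions (bistability).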
Spiking neural networks (SNNs) transmit information through discrete spikes, which makes them well suited to processing spatio-temporal information. Because spikes are non-differentiable, designing well-performing SNNs remains difficult. Recently, SNNs trained with backpropagation have shown superior performance thanks to gradient-approximation methods. However, their performance on complex tasks still falls well short of deep neural networks. Taking inspiration from the autapse in the brain, which connects a spiking neuron to itself through a self-feedback connection, we apply an adaptive time-delayed self-feedback on the membrane potential to regulate spike precision. In addition, we apply a balanced excitatory and inhibitory neuron mechanism to dynamically control the output of the spiking neurons. Combining the two mechanisms, we propose a deep spiking neural network with adaptive self-feedback and balanced excitatory and inhibitory neurons (BackEISNN). Experimental results on several standard datasets show that the two modules not only accelerate the convergence of the network but also improve its accuracy. For the MNIST, FashionMNIST, and N-MNIST datasets, our model achieves state-of-the-art performance. For the CIFAR10 dataset, BackEISNN also achieves remarkable performance with a relatively light structure, competitive with state-of-the-art SNNs.
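A minimal sketch of the self-feedback idea, assuming a discrete-time LIF neuron with reset by output gating; the delay d, gain alpha, and input statistics are hypothetical stand-ins, and BackEISNN's actual feedback term is adaptive and learned rather than fixed.

import numpy as np

T, d = 100, 5                  # simulation steps, feedback delay (steps)
decay, vth, alpha = 0.9, 1.0, 0.3  # leak, threshold, feedback gain (assumed)
rng = np.random.default_rng(0)

v, spikes = 0.0, np.zeros(T, dtype=int)
for t in range(T):
    i_in = rng.uniform(0.0, 0.3)                     # stand-in input current
    fb = alpha * spikes[t - d] if t >= d else 0.0    # delayed self-feedback
    prev = spikes[t - 1] if t > 0 else 0
    v = decay * v * (1 - prev) + i_in + fb           # leak, gated reset, feedback
    spikes[t] = int(v >= vth)
print("firing rate:", spikes.mean())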
Biological synaptic plasticity exhibits nonlinearities that are not accounted for by classic Hebbian learning rules. Here, we introduce a simple family of generalized, nonlinear Hebbian learning rules. We study the computations implemented by their dynamics in the simple setting of a neuron receiving feedforward inputs. We show that these nonlinear Hebbian rules allow a neuron to learn tensor decompositions of its higher-order input correlations. The particular input correlation decomposed, and the form of the decomposition, depend on the location of nonlinearities in the plasticity rule. For simple, biologically motivated parameters, the neuron learns tensor eigenvectors of higher-order input correlations. We prove that each tensor eigenvector is an attractor and determine their basins of attraction. We calculate the volume of those basins, showing that the dominant eigenvector has the largest basin of attraction. We then study arbitrary learning rules, and find that any learning rule that admits a finite Taylor expansion into the neural input and output also has stable equilibria at tensor eigenvectors of its higher-order input correlations. Nonlinearities in synaptic plasticity thus allow a neuron to encode higher-order input correlations in a simple fashion.
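As a minimal illustration of such a rule, the sketch below trains a single neuron with the update dw ∝ x (w·x)^2 plus renormalization, one simple member of this family; the specific nonlinearity, input distribution, and learning rate are assumptions. Its fixed points satisfy E[x (w·x)^2] ∝ w, i.e. w is an eigenvector of the third-order input-correlation tensor E[x ⊗ x ⊗ x].

import numpy as np

rng = np.random.default_rng(0)
n, eta, steps = 5, 0.01, 20000

# Zero-mean but skewed inputs, so third-order correlations are nontrivial.
x_data = rng.exponential(1.0, size=(steps, n)) - 1.0

w = rng.normal(size=n)
w /= np.linalg.norm(w)
for x in x_data:
    y = w @ x                        # neural output
    w += eta * x * y**2              # nonlinear Hebbian update, f(y) = y^2
    w /= np.linalg.norm(w)           # renormalize onto the unit sphere

# Tensor-eigenvector check: h = E[x (w·x)^2] should align with w.
h = np.mean(x_data * (x_data @ w)[:, None] ** 2, axis=0)
print(np.linalg.norm(h / np.linalg.norm(h) - w))  # small after convergence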
The brain is characterized by a strong heterogeneity of inhibitory neurons. We report that spiking neural networks display a resonance to the heterogeneity of inhibitory neurons, with optimal input/output responsiveness occurring for levels of heterogeneity similar to that found experimentally in cerebral cortex. A heterogeneous mean-field model predicts such optimal responsiveness. Moreover, we show that new dynamical regimes emerge from heterogeneity that were not present in the equivalent homogeneous system, such as sparsely synchronous collective oscillations.
Neurons in the intact brain receive a continuous and irregular synaptic bombardment from excitatory and inhibitory pre-synaptic neurons, which determines the firing activity of the stimulated neuron. To investigate the influence of inhibitory stimulation on the firing time statistics, we consider Leaky Integrate-and-Fire neurons subject to inhibitory instantaneous post-synaptic potentials. In particular, we report exact results for the firing rate, the coefficient of variation, and the spike train spectrum for various synaptic weight distributions. Our results are not limited to stimulations of infinitesimal amplitude but apply equally to finite-amplitude post-synaptic potentials, thus capturing the effect of rare and large spikes. The developed methods also reproduce the average firing properties of heterogeneous neuronal populations.
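A Monte Carlo counterpart to these exact results, as a sketch: an LIF neuron with suprathreshold mean drive receives inhibitory instantaneous PSPs of finite amplitude arriving as a Poisson process, and the firing rate and CV are estimated from the interspike intervals. All parameters are illustrative, and the fixed PSP amplitude is a simplification standing in for the general weight distributions treated in the paper.

import numpy as np

rng = np.random.default_rng(1)
tau, vth, vreset = 20e-3, 1.0, 0.0   # membrane time constant (s), threshold, reset
mu = 1.2                             # suprathreshold excitatory drive
nu_i, a_i = 100.0, 0.02              # inhibitory input rate (Hz), PSP amplitude
dt, T = 1e-4, 20.0                   # time step and total duration (s)

v, t_last, isis = 0.0, 0.0, []
for k in range(int(T / dt)):
    v += dt * (mu - v) / tau         # leaky integration toward mu
    if rng.random() < nu_i * dt:     # Poisson arrival of an inhibitory spike
        v -= a_i                     # instantaneous finite-amplitude PSP
    if v >= vth:                     # threshold crossing: record ISI and reset
        t = (k + 1) * dt
        isis.append(t - t_last)
        t_last, v = t, vreset

isis = np.array(isis)
print("rate = %.1f Hz, CV = %.2f" % (1.0 / isis.mean(), isis.std() / isis.mean()))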