Biological synaptic plasticity exhibits nonlinearities that are not accounted for by classic Hebbian learning rules. Here, we introduce a simple family of generalized, nonlinear Hebbian learning rules. We study the computations implemented by their dynamics in the simple setting of a neuron receiving feedforward inputs. We show that these nonlinear Hebbian rules allow a neuron to learn tensor decompositions of its higher-order input correlations. The particular input correlation decomposed, and the form of the decomposition, depend on the location of nonlinearities in the plasticity rule. For simple, biologically motivated parameters, the neuron learns tensor eigenvectors of higher-order input correlations. We prove that each tensor eigenvector is an attractor and determine their basins of attraction. We calculate the volume of those basins, showing that the dominant eigenvector has the largest basin of attraction. We then study arbitrary learning rules and find that any learning rule that admits a finite Taylor expansion in the neural input and output also has stable equilibria at tensor eigenvectors of its higher-order input correlations. Nonlinearities in synaptic plasticity thus allow a neuron to encode higher-order input correlations in a simple fashion.
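A minimal sketch of one such rule, with the nonlinearity placed on the postsynaptic output. The input model (three orthonormal patterns plus noise), learning rate, and normalization are illustrative assumptions, not parameters from the paper: on average, the update dw ∝ x·y² with y = w·x performs tensor power iteration on the third-order input correlation E[x ⊗ x ⊗ x], so the weight vector flows to one of its tensor eigenvectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy input ensemble (an assumption for illustration): three orthonormal
# patterns, each presented with small additive noise.
patterns, _ = np.linalg.qr(rng.normal(size=(8, 3)))
patterns = patterns.T  # shape (3, 8): one pattern per row

def sample_input():
    p = patterns[rng.integers(3)]
    return p + 0.01 * rng.normal(size=p.shape)

# Nonlinear Hebbian rule: dw proportional to x * y**2 with y = w @ x,
# followed by a norm constraint. Averaged over inputs, each step is a
# tensor power iteration step on the third-order input correlation.
w = rng.normal(size=8)
w /= np.linalg.norm(w)
eta = 0.05
for _ in range(5000):
    x = sample_input()
    y = w @ x
    w += eta * (y ** 2) * x
    w /= np.linalg.norm(w)

# w should align with one of the patterns, i.e. a tensor eigenvector
# of the third-order moment of this input ensemble.
print(np.abs(patterns @ w).max())
```

The quadratic output nonlinearity is what selects the third-order correlation; a linear rule (y to the first power) would instead recover an ordinary eigenvector of the input covariance, as in Oja's rule.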
A companion paper introduces a nonlinear network with Hebbian excitatory (E) neurons that are reciprocally coupled with anti-Hebbian inhibitory (I) neurons and also receive Hebbian feedforward excitation from sensory (S) afferents. The present paper
We consider the problem of decomposing higher-order moment tensors, i.e., the sum of symmetric outer products of data vectors. Such a decomposition can be used to estimate the means in a Gaussian mixture model and for other applications in machine learning.
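The object being decomposed can be made concrete with a few lines of NumPy (the data here are arbitrary, for illustration only): the empirical third-order moment tensor is the average of the symmetric outer products x ⊗ x ⊗ x over the data.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))  # 100 data vectors in R^4

# Empirical third-order moment tensor: mean of x (x) x (x) x over the data.
T = np.einsum('ni,nj,nk->ijk', X, X, X) / len(X)

# By construction T is symmetric under any permutation of its indices.
print(T.shape)                                 # (4, 4, 4)
print(np.allclose(T, T.transpose(1, 0, 2)))    # True
```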
Adaptation plays a pivotal role in the evolution of natural and artificial complex systems, and in the determination of their functionality. Here, we investigate the impact of adaptive inter-layer processes on intra-layer synchronization in multiplex networks.
Neural populations exposed to a certain stimulus learn to represent it better. However, how local, self-organized rules accomplish this is unclear. We address the question of how a periodic neural input can be learned, and use the Differen
A calculation method for higher-order moments of physical quantities, including magnetization and energy, based on the higher-order tensor renormalization group is proposed. The physical observables are represented by impurity tensors. A systematic s