
Tensor decomposition of higher-order correlations by nonlinear Hebbian plasticity

Added by Gabriel Ocker
Publication date: 2021
Language: English





Biological synaptic plasticity exhibits nonlinearities that are not accounted for by classic Hebbian learning rules. Here, we introduce a simple family of generalized, nonlinear Hebbian learning rules. We study the computations implemented by their dynamics in the simple setting of a neuron receiving feedforward inputs. We show that these nonlinear Hebbian rules allow a neuron to learn tensor decompositions of its higher-order input correlations. The particular input correlation decomposed, and the form of the decomposition, depend on the location of nonlinearities in the plasticity rule. For simple, biologically motivated parameters, the neuron learns tensor eigenvectors of higher-order input correlations. We prove that each tensor eigenvector is an attractor and determine their basins of attraction. We calculate the volume of those basins, showing that the dominant eigenvector has the largest basin of attraction. We then study arbitrary learning rules, and find that any learning rule that admits a finite Taylor expansion in the neural input and output also has stable equilibria at tensor eigenvectors of its higher-order input correlations. Nonlinearities in synaptic plasticity thus allow a neuron to encode higher-order input correlations in a simple fashion.
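The abstract does not spell out the rule family, but a minimal sketch conveys the idea. Assuming a rule with a quadratic nonlinearity in the neural output and homeostatic weight normalization (illustrative choices made here, not necessarily the paper's parameters), the weight vector settles at a tensor (Z-)eigenvector of the third-order input correlation tensor $T_{ijk} = \langle x_i x_j x_k \rangle$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Zero-mean, skewed inputs so the third-order correlation tensor is nonzero.
n_in, n_steps, eta = 5, 100_000, 1e-3
X = rng.exponential(1.0, size=(n_steps, n_in)) - 1.0
X[:, 0] *= 2.0  # channel 0 carries the dominant third-order correlation

w = rng.normal(size=n_in)
w /= np.linalg.norm(w)

for x in X:
    y = w @ x                  # linear neural response
    w += eta * (y ** 2) * x    # Hebbian update, quadratic in the output
    w /= np.linalg.norm(w)     # homeostatic weight normalization

# Fixed points satisfy <y^2 x> parallel to w, i.e. T(., w, w) = lambda * w
# with T_ijk = <x_i x_j x_k>: w is a tensor (Z-)eigenvector of T.
T = np.einsum('ti,tj,tk->ijk', X, X, X) / n_steps
Tww = np.einsum('ijk,j,k->i', T, w, w)
print(np.round(w, 3), np.round(Tww / np.linalg.norm(Tww), 3))
```

At equilibrium the averaged update direction $\langle y^2 x \rangle = T(\cdot, w, w)$ is parallel to $w$, which is exactly the tensor eigenvector equation the abstract refers to.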



Related research

H. Sebastian Seung (2018)
A companion paper introduces a nonlinear network with Hebbian excitatory (E) neurons that are reciprocally coupled with anti-Hebbian inhibitory (I) neurons and also receive Hebbian feedforward excitation from sensory (S) afferents. The present paper derives the network from two normative principles that are mathematically equivalent but conceptually different. The first principle formulates unsupervised learning as a constrained optimization problem: maximization of S-E correlations subject to a copositivity constraint on E-E correlations. A combination of Legendre and Lagrangian duality yields a zero-sum continuous game between excitatory and inhibitory connections that is solved by the neural network. The second principle defines a zero-sum game between E and I cells. E cells want to maximize S-E correlations and minimize E-I correlations, while I cells want to maximize I-E correlations and minimize power. The conflict between I and E objectives effectively forces the E cells to decorrelate from each other, although only incompletely. Legendre duality yields the neural network.
We consider the problem of decomposing higher-order moment tensors, i.e., the sum of symmetric outer products of data vectors. Such a decomposition can be used to estimate the means in a Gaussian mixture model and for other applications in machine learning. The $d$th-order empirical moment tensor of a set of $p$ observations of $n$ variables is a symmetric $d$-way tensor. Our goal is to find a low-rank tensor approximation comprising $r \ll p$ symmetric outer products. The challenge is that forming the empirical moment tensor costs $O(pn^d)$ operations and $O(n^d)$ storage, which may be prohibitively expensive; additionally, the algorithm to compute the low-rank approximation costs $O(n^d)$ per iteration. Our contribution is to avoid forming the moment tensor altogether, computing the low-rank tensor approximation implicitly using $O(pnr)$ operations per iteration and no extra memory. This advance opens the door to more applications of higher-order moments, since they can now be computed efficiently. We present numerical evidence of the computational savings and show an example of estimating means from higher-order moments.
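The identity that makes the implicit computation possible can be sketched in a few lines: contracting the empirical moment tensor with a vector $d-1$ times only requires the inner products between the data and that vector. This is a toy illustration under assumed names, not the authors' code:

```python
import numpy as np

def moment_tensor_apply(X, a, d=3):
    # T(., a, ..., a) for T = (1/p) * sum_t x_t^{outer d}, never forming T:
    # O(p n) work instead of O(p n^d) formation and O(n^d) storage.
    s = (X @ a) ** (d - 1)        # inner products <x_t, a>^(d-1), shape (p,)
    return X.T @ s / X.shape[0]   # shape (n,)

# Sanity check against the explicit third-order tensor on a tiny problem.
rng = np.random.default_rng(1)
X, a = rng.normal(size=(50, 4)), rng.normal(size=4)
T = np.einsum('ti,tj,tk->ijk', X, X, X) / X.shape[0]
assert np.allclose(moment_tensor_apply(X, a), np.einsum('ijk,j,k->i', T, a, a))
```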
Adaptation plays a pivotal role in the evolution of natural and artificial complex systems and in the determination of their functionality. Here, we investigate the impact of adaptive inter-layer processes on intra-layer synchronization in multiplex networks. The considered adaptation mechanism is governed by a Hebbian learning rule, i.e., the link weight between a pair of interconnected nodes is enhanced if the two nodes are in phase. Such adaptive coupling induces an irreversible first-order transition route to synchronization accompanied by hysteresis. We provide rigorous analytic predictions of the critical coupling strengths for the onset of synchronization and desynchronization, and verify all our theoretical predictions by means of extensive numerical simulations.
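As a toy illustration of the mechanism (illustrative parameters and one-to-one inter-layer wiring assumed here, not the paper's exact model), a two-layer Kuramoto multiplex with Hebbian adaptation of the inter-layer weights can be stepped as follows:

```python
import numpy as np

rng = np.random.default_rng(2)
N, dt, steps = 200, 0.01, 5_000
lam, eps = 0.8, 0.5                        # intra-layer coupling, adaptation rate
omega = rng.normal(0.0, 1.0, size=(2, N))  # natural frequencies per layer
theta = rng.uniform(0.0, 2 * np.pi, size=(2, N))
w = np.zeros(N)                            # adaptive inter-layer link weights

for _ in range(steps):
    z = np.exp(1j * theta).mean(axis=1)    # per-layer complex order parameters
    r, psi = np.abs(z), np.angle(z)
    dtheta = omega + lam * r[:, None] * np.sin(psi[:, None] - theta)
    dtheta += w * np.sin(theta[::-1] - theta)        # pull from the other layer
    # Hebbian rule: a link grows while its two replica nodes are in phase.
    w += dt * eps * (np.cos(theta[0] - theta[1]) - w)
    theta = (theta + dt * dtheta) % (2 * np.pi)

print("intra-layer order parameters r1, r2:", np.round(r, 3))
```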
Neural populations exposed to a certain stimulus learn to represent it better. However, the process by which local, self-organized rules accomplish this is unclear. We address the question of how a periodic neural input can be learned, using the Differential Hebbian Learning framework coupled with a homeostatic mechanism to derive two self-consistency equations that lead to increased responses to the same stimulus. Although all our simulations use simple leaky integrate-and-fire neurons and standard spike-timing-dependent plasticity (STDP) learning rules, our results can be easily interpreted in terms of rates and population codes.
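For reference, the standard pair-based STDP rule mentioned above is usually written as an exponential window in the pre/post spike-time difference; a minimal sketch with illustrative constants (not the paper's parameters):

```python
import numpy as np

def stdp_window(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Pair-based STDP window, dt = t_post - t_pre (textbook exponential
    form with illustrative amplitudes and time constant)."""
    return np.where(dt_ms > 0,
                    a_plus * np.exp(-dt_ms / tau_ms),    # pre before post: LTP
                    -a_minus * np.exp(dt_ms / tau_ms))   # post before pre: LTD

# Causal pairings potentiate the synapse, anti-causal ones depress it:
print(stdp_window(np.array([-10.0, 5.0, 40.0])))
```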
A calculation method for higher-order moments of physical quantities, including the magnetization and energy, based on the higher-order tensor renormalization group is proposed. The physical observables are represented by impurity tensors, and a systematic summation scheme provides coarse-grained tensors containing multiple impurities. Our method is compared with the Monte Carlo method on the two-dimensional Potts model. While the nature of the transition of the $q$-state Potts model has long been known from analytical arguments, clear numerical confirmation has been difficult because of the extremely long correlation lengths of weakly first-order transitions, e.g., for $q=5$. A jump of the Binder ratio precisely determines the transition temperature, and finite-size scaling analysis provides critical exponents and distinguishes weakly first-order from continuous transitions.
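The Binder-ratio diagnostic is built from exactly the moments the impurity tensors provide. As a hedged illustration (the standard cumulant convention is used here; the paper's exact ratio may differ), it can be estimated from order-parameter samples as:

```python
import numpy as np

def binder_cumulant(m):
    # Binder cumulant U4 = 1 - <m^4> / (3 <m^2>^2) of order-parameter
    # samples m; some works use the bare ratio <m^4>/<m^2>^2 instead.
    # Either quantity jumps at a (weakly) first-order transition.
    m2, m4 = np.mean(m ** 2), np.mean(m ** 4)
    return 1.0 - m4 / (3.0 * m2 ** 2)
```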