
Hebbian learning of recurrent connections: a geometrical perspective

Posted by Mathieu Galtier
Publication date: 2011
Research fields: Biology, Physics
Paper language: English





We show how a Hopfield network with modifiable recurrent connections undergoing slow Hebbian learning can extract the underlying geometry of an input space. First, we use a slow/fast analysis to derive an averaged system whose dynamics derive from an energy function and therefore always converge to equilibrium points. The equilibria reflect the correlation structure of the inputs, a global object extracted through local recurrent interactions only. Second, we use numerical methods to illustrate how learning extracts the hidden geometrical structure of the inputs. Indeed, multidimensional scaling methods make it possible to project the final connectivity matrix onto a distance matrix in a high-dimensional space, with the neurons labelled by spatial position within this space. The resulting network structure turns out to be roughly convolutional. The residual of the projection defines the non-convolutional part of the connectivity, which is minimized in the process. Next, we show how restricting the dimension of the space where the neurons live gives rise to patterns similar to cortical maps. We motivate this using an energy efficiency argument based on wire length minimization. Finally, we show how this approach leads to the emergence of ocular dominance or orientation columns in primary visual cortex. In addition, we establish that the non-convolutional (or long-range) connectivity is patchy, and is co-aligned in the case of orientation learning.
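The two-step pipeline in the abstract — slow Hebbian learning of recurrent weights, then multidimensional scaling of the learned connectivity — can be sketched in a few lines. The ring-shaped bump inputs, the one-pass approximation of the fast rate dynamics, and the decaying Hebbian rule below are illustrative assumptions, not the paper's exact equations; classical MDS then embeds the neurons from the learned weight matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps, steps = 50, 1e-3, 2000   # neurons, slow learning rate, input samples

# Hidden geometry: neuron i prefers position i/n on a ring (an assumption
# chosen so that the input correlations decay with circular distance).
pos = np.arange(n) / n

W = np.zeros((n, n))
for _ in range(steps):
    # Smooth bump input centred at a random ring position.
    c = rng.random()
    d = np.minimum(np.abs(pos - c), 1 - np.abs(pos - c))
    inp = np.exp(-(d / 0.1) ** 2)
    v = np.tanh(W @ inp + inp)            # crude one-pass fast dynamics
    W += eps * (np.outer(v, v) - W)       # slow Hebbian learning with decay

# Classical MDS on the learned connectivity: strong weights -> short distances.
D = W.max() - W
J = np.eye(n) - np.ones((n, n)) / n      # centring matrix
B = -0.5 * J @ (D ** 2) @ J
evals, evecs = np.linalg.eigh(B)
coords = evecs[:, -2:] * np.sqrt(np.maximum(evals[-2:], 0))  # 2-D embedding
```

Plotting `coords` should recover, up to rotation, the circular arrangement of the neurons; the part of `W` not explained by the embedding plays the role of the non-convolutional residual discussed in the abstract.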




Read also

The understanding of neural activity patterns is fundamentally linked to an understanding of how the brain's network architecture shapes dynamical processes. Established approaches rely mostly on deviations of a given network from certain classes of random graphs. Hypotheses about the supposed role of prominent topological features (for instance, the roles of modularity, network motifs, or hierarchical network organization) are derived from these deviations. An alternative strategy could be to study deviations of network architectures from regular graphs (rings, lattices) and consider the implications of such deviations for self-organized dynamic patterns on the network. Following this strategy, we draw on the theory of spatiotemporal pattern formation and propose a novel perspective for analyzing dynamics on networks, by evaluating how the self-organized dynamics are confined by network architecture to a small set of permissible collective states. In particular, we discuss the role of prominent topological features of brain connectivity, such as hubs, modules and hierarchy, in shaping activity patterns. We illustrate the notion of network-guided pattern formation with numerical simulations and outline how it can facilitate the understanding of neural dynamics.
H. Sebastian Seung, 2018
A companion paper introduces a nonlinear network with Hebbian excitatory (E) neurons that are reciprocally coupled with anti-Hebbian inhibitory (I) neurons and also receive Hebbian feedforward excitation from sensory (S) afferents. The present paper derives the network from two normative principles that are mathematically equivalent but conceptually different. The first principle formulates unsupervised learning as a constrained optimization problem: maximization of S-E correlations subject to a copositivity constraint on E-E correlations. A combination of Legendre and Lagrangian duality yields a zero-sum continuous game between excitatory and inhibitory connections that is solved by the neural network. The second principle defines a zero-sum game between E and I cells. E cells want to maximize S-E correlations and minimize E-I correlations, while I cells want to maximize I-E correlations and minimize power. The conflict between I and E objectives effectively forces the E cells to decorrelate from each other, although only incompletely. Legendre duality yields the neural network.
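The combination of Hebbian feedforward learning and anti-Hebbian lateral learning described here can be caricatured by a related similarity-matching network (in the spirit of Pehlevan and Seung's earlier work, not the exact E-I game derived in this paper): the feedforward weights grow with input-output correlations, the lateral weights grow with output-output correlations, and the output is the fixed point of the fast dynamics. All sizes, rates, and the input spectrum below are arbitrary test choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n_x, n_y, eta, T = 8, 2, 1e-2, 10000

# Inputs with a known covariance: variance 9 and 4 along the first two axes,
# weaker variance elsewhere (an arbitrary test spectrum).
scales = np.array([3.0, 2.0, 1.0, 0.5, 0.5, 0.5, 0.5, 0.5])
X = rng.normal(size=(T, n_x)) * scales

W = rng.normal(size=(n_y, n_x)) / np.sqrt(n_x)  # feedforward, Hebbian
M = np.eye(n_y)                                  # lateral, anti-Hebbian

for x in X:
    y = np.linalg.solve(M, W @ x)      # fixed point of the fast dynamics
    W += eta * (np.outer(y, x) - W)    # Hebbian: track input-output corr.
    M += eta * (np.outer(y, y) - M)    # anti-Hebbian: track output corr.
```

The conflict between the two rules forces the outputs to span the principal subspace of the inputs while the lateral term limits how correlated they can remain, a weaker cousin of the incomplete decorrelation described in the abstract.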
Biological synaptic plasticity exhibits nonlinearities that are not accounted for by classic Hebbian learning rules. Here, we introduce a simple family of generalized, nonlinear Hebbian learning rules. We study the computations implemented by their dynamics in the simple setting of a neuron receiving feedforward inputs. We show that these nonlinear Hebbian rules allow a neuron to learn tensor decompositions of its higher-order input correlations. The particular input correlation decomposed, and the form of the decomposition, depend on the location of nonlinearities in the plasticity rule. For simple, biologically motivated parameters, the neuron learns tensor eigenvectors of higher-order input correlations. We prove that each tensor eigenvector is an attractor and determine their basins of attraction. We calculate the volume of those basins, showing that the dominant eigenvector has the largest basin of attraction. We then study arbitrary learning rules, and find that any learning rule that admits a finite Taylor expansion into the neural input and output also has stable equilibria at tensor eigenvectors of its higher-order input correlations. Nonlinearities in synaptic plasticity thus allow a neuron to encode higher-order input correlations in a simple fashion.
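The linear special case contained in this family is the classic Oja rule, where a single neuron learns the leading eigenvector (the order-2 "tensor eigenvector") of its input correlations; the nonlinear generalizations replace the linear output with a nonlinearity. A minimal sketch of that linear case, with an arbitrary input distribution chosen so the leading eigenvector is known:

```python
import numpy as np

rng = np.random.default_rng(1)

# Inputs = isotropic noise plus strong variance along a known direction u,
# so the covariance is 0.04*I + 4*u u^T and its top eigenvector is u.
u = np.array([3.0, 1.0, 0.5])
u /= np.linalg.norm(u)
X = 0.2 * rng.normal(size=(20000, 3)) + 2.0 * rng.normal(size=(20000, 1)) * u

w, eta = rng.normal(size=3), 1e-3
for x in X:
    y = w @ x                      # linear output: the classic Hebbian case
    w += eta * y * (x - y * w)     # Oja's rule: Hebbian term + decay that
                                   # normalizes w toward unit length

# Compare against the empirical covariance's leading eigenvector.
cov = X.T @ X / len(X)
evals, evecs = np.linalg.eigh(cov)
alignment = abs(w @ evecs[:, -1]) / np.linalg.norm(w)
```

Replacing `y = w @ x` with a nonlinear function of `w @ x` is what moves the rule into the generalized family studied here, where the attractors become tensor eigenvectors of higher-order correlations rather than ordinary eigenvectors of the covariance.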
A popular theory of perceptual processing holds that the brain learns both a generative model of the world and a paired recognition model using variational Bayesian inference. Most hypotheses of how the brain might learn these models assume that neur ons in a population are conditionally independent given their common inputs. This simplification is likely not compatible with the type of local recurrence observed in the brain. Seeking an alternative that is compatible with complex inter-dependencies yet consistent with known biology, we argue here that the cortex may learn with an adversarial algorithm. Many observable symptoms of this approach would resemble known neural phenomena, including wake/sleep cycles and oscillations that vary in magnitude with surprise, and we describe how further predictions could be tested. We illustrate the idea on recurrent neural networks trained to model image and video datasets. This framework for learning brings variational inference closer to neuroscience and yields multiple testable hypotheses.
In the realm of motor control, artificial agents cannot match the performance of their biological counterparts. We thus explore a neural control architecture that is both biologically plausible, and capable of fully autonomous learning. The architecture consists of feedback controllers that learn to achieve a desired state by selecting the errors that should drive them. This selection happens through a family of differential Hebbian learning rules that, through interaction with the environment, can learn to control systems where the error responds monotonically to the control signal. We next show that in a more general case, neural reinforcement learning can be coupled with a feedback controller to reduce errors that arise non-monotonically from the control signal. The use of feedback control reduces the complexity of the reinforcement learning problem, because only a desired value must be learned, with the controller handling the details of how it is reached. This makes the function to be learned simpler, potentially allowing more complex actions to be learned. We discuss how this approach could be extended to hierarchical architectures.
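The monotone case can be caricatured in one dimension: a plant whose gain has a sign unknown to the controller, and a differential Hebbian rule that correlates changes in the control signal with changes in the error, strengthening whichever control direction shrinks the error. The specific rule, plant, and constants below are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, T, eta, sigma = 0.01, 3000, 0.5, 0.1
k = -2.0          # plant gain; its sign is unknown to the controller
target = 1.0
x, g = 0.0, 0.0   # plant state and learned controller gain

prev_e, prev_u = target - x, 0.0
for _ in range(T):
    e = target - x
    u = g * e + sigma * rng.normal()   # proportional control + exploration
    x += dt * k * u                    # plant: error responds monotonically
    de = (target - x) - prev_e         # change in error
    du = u - prev_u                    # change in control
    g += eta * du * (-de)              # differential Hebbian: reinforce
                                       # control changes that reduce error
    prev_e, prev_u = (target - x), u
```

The exploration noise bootstraps learning: control fluctuations that happen to reduce the error push `g` toward the correct sign, after which ordinary proportional feedback drives the state to the target.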