
Neurally Implementable Semantic Networks

Published by: Garrett Evans
Publication date: 2013
Paper language: English

We propose general principles for semantic networks allowing them to be implemented as dynamical neural networks. Major features of our scheme include: (a) the interpretation that each node in a network stands for a bound integration of the meanings of all nodes and external events the node links with; (b) the systematic use of nodes that stand for categories or types, with separate nodes for instances of these types; (c) an implementation of relationships that does not use intrinsically typed links between nodes.
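
To make these principles concrete, here is a minimal, hypothetical Python sketch (not the paper's implementation): nodes carry no meaning of their own beyond their untyped links, categories and their instances get separate nodes, and a relationship is represented by a node of its own rather than by a typed link. All class and variable names are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass(eq=False)
class Node:
    """Principle (a): a node's meaning is the bound integration of whatever it links with."""
    label: str
    links: set = field(default_factory=set)  # plain, untyped links (principle c)

def link(a: Node, b: Node) -> None:
    """A symmetric connection with no intrinsic type."""
    a.links.add(b)
    b.links.add(a)

# Principle (b): separate nodes for a category (type) and for its instances.
dog_type, fido = Node("DOG (type)"), Node("fido (instance)")
cat_type, cat_1 = Node("CAT (type)"), Node("cat_1 (instance)")
link(fido, dog_type)
link(cat_1, cat_type)

# Principle (c): the relationship "fido chases cat_1" is itself an instance node
# linked to a CHASE type node, so no link needs a type such as "agent" or "patient".
chase_type, chase_1 = Node("CHASE (type)"), Node("chase_1 (instance)")
link(chase_1, chase_type)
link(chase_1, fido)
link(chase_1, cat_1)

# The meaning of chase_1 is read off from everything it links with (principle a).
print(sorted(n.label for n in chase_1.links))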


Read also

A popular theory of perceptual processing holds that the brain learns both a generative model of the world and a paired recognition model using variational Bayesian inference. Most hypotheses of how the brain might learn these models assume that neurons in a population are conditionally independent given their common inputs. This simplification is likely not compatible with the type of local recurrence observed in the brain. Seeking an alternative that is compatible with complex inter-dependencies yet consistent with known biology, we argue here that the cortex may learn with an adversarial algorithm. Many observable symptoms of this approach would resemble known neural phenomena, including wake/sleep cycles and oscillations that vary in magnitude with surprise, and we describe how further predictions could be tested. We illustrate the idea on recurrent neural networks trained to model image and video datasets. This framework for learning brings variational inference closer to neuroscience and yields multiple testable hypotheses.
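
For a concrete reference point on the class of algorithm being proposed, the sketch below runs a generic adversarial (GAN-style) training loop on toy one-dimensional data in PyTorch. It is not the authors' recurrent model; the network sizes, learning rates, and toy data distribution are arbitrary assumptions chosen only to show the alternating two-player updates.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator maps noise to samples; discriminator scores real vs. generated.
gen = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
disc = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Stand-in "sensory" data: a Gaussian centred at 2.
    return 2.0 + 0.5 * torch.randn(n, 1)

for step in range(2000):
    x = real_batch()
    fake = gen(torch.randn(x.size(0), 4))

    # Discriminator step: learn to tell data from generated samples.
    d_loss = bce(disc(x), torch.ones_like(x)) + bce(disc(fake.detach()), torch.zeros_like(x))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: adjust so generated samples are scored as data.
    g_loss = bce(disc(fake), torch.ones_like(x))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(gen(torch.randn(1000, 4)).mean().item())  # should drift toward the data mean (~2)
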
Blind source separation, i.e. extraction of independent sources from a mixture, is an important problem for both artificial and natural signal processing. Here, we address a special case of this problem when sources (but not the mixing matrix) are known to be nonnegative, for example, due to the physical nature of the sources. We search for the solution to this problem that can be implemented using biologically plausible neural networks. Specifically, we consider the online setting where the dataset is streamed to a neural network. The novelty of our approach is that we formulate blind nonnegative source separation as a similarity matching problem and derive neural networks from the similarity matching objective. Importantly, synaptic weights in our networks are updated according to biologically plausible local learning rules.
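
To make the online setting concrete, the sketch below is a generic Hebbian/anti-Hebbian similarity-matching network in NumPy: samples stream in one at a time, the output layer settles under lateral inhibition, and each weight update uses only local quantities. The specific dynamics and update rules here are one common form and need not match the rules derived in the paper; the toy mixing matrix, dimensions, and learning rate are assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_src, n_mix, eta = 3, 5, 0.01

A = rng.uniform(0.5, 1.5, size=(n_mix, n_src))     # unknown (toy) mixing matrix
W = 0.1 * rng.standard_normal((n_src, n_mix))      # feedforward (Hebbian) weights
M = np.zeros((n_src, n_src))                       # lateral (anti-Hebbian) weights

def output_dynamics(x, W, M, n_iter=100):
    # Relax toward a fixed point of y = max(0, W x - M y): rectified outputs with lateral inhibition.
    y = np.zeros(n_src)
    for _ in range(n_iter):
        y += 0.5 * (np.maximum(0.0, W @ x - M @ y) - y)
    return y

for t in range(20000):                             # online: the dataset is streamed sample by sample
    s = rng.exponential(size=n_src)                # nonnegative sources
    x = A @ s                                      # observed mixture
    y = output_dynamics(x, W, M)

    # Local learning: each weight change depends only on its own pre-/post-synaptic
    # activities and its current value.
    W += eta * (np.outer(y, x) - (y ** 2)[:, None] * W)
    M += eta * (np.outer(y, y) - (y ** 2)[:, None] * M)
    np.fill_diagonal(M, 0.0)                       # no self-inhibition
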
Task-based modeling with recurrent neural networks (RNNs) has emerged as a popular way to infer the computational function of different brain regions. These models are quantitatively assessed by comparing the low-dimensional neural representations of the model with the brain, for example using canonical correlation analysis (CCA). However, the nature of the detailed neurobiological inferences one can draw from such efforts remains elusive. For example, to what extent does training neural networks to solve common tasks uniquely determine the network dynamics, independent of modeling architectural choices? Or alternatively, are the learned dynamics highly sensitive to different model choices? Knowing the answer to these questions has strong implications for whether and how we should use task-based RNN modeling to understand brain dynamics. To address these foundational questions, we study populations of thousands of networks, with commonly used RNN architectures, trained to solve neuroscientifically motivated tasks and characterize their nonlinear dynamics. We find that the geometry of the RNN representations can be highly sensitive to different network architectures, yielding a cautionary tale for measures of similarity that rely on representational geometry, such as CCA. Moreover, we find that while the geometry of neural dynamics can vary greatly across architectures, the underlying computational scaffold (the topological structure of fixed points, transitions between them, limit cycles, and linearized dynamics) often appears universal across all architectures.
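
For reference, the representational-geometry comparison mentioned above can be computed as canonical correlations between two matrices of hidden states. The sketch below implements CCA with a standard QR/SVD recipe and applies it to random stand-in matrices; in a full pipeline the matrices would be hidden-state trajectories of trained RNNs (or neural recordings), and all names and sizes here are illustrative assumptions.

import numpy as np

def cca_correlations(X, Y, n_components=None):
    """Canonical correlations between row-matched datasets X (T x n) and Y (T x m)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)                      # orthonormal bases for the two state spaces
    Qy, _ = np.linalg.qr(Y)
    corrs = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    k = n_components or min(X.shape[1], Y.shape[1])
    return corrs[:k]

rng = np.random.default_rng(0)
T = 500                                          # matched time points / conditions
H_a = rng.standard_normal((T, 64))               # stand-in hidden states of architecture A
H_b = H_a @ rng.standard_normal((64, 48)) + 0.1 * rng.standard_normal((T, 48))  # a related representation

print(cca_correlations(H_a, H_b)[:5])            # values near 1 indicate similar geometry
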
Feedforward networks (FFN) are ubiquitous structures in neural systems and have been studied to understand mechanisms of reliable signal and information transmission. In many FFNs, neurons in one layer have intrinsic properties that are distinct from those in their pre-/postsynaptic layers, but how this affects network-level information processing remains unexplored. Here we show that layer-to-layer heterogeneity arising from lamina-specific cellular properties facilitates signal and information transmission in FFNs. Specifically, we found that signal transformations, made by each layer of neurons on an input-driven spike signal, demodulate signal distortions introduced by preceding layers. This mechanism boosts information transfer carried by a propagating spike signal and thereby supports reliable spike signal and information transmission in a deep FFN. Our study suggests that distinct cell types in neural circuits, performing different computational functions, facilitate information processing on the whole.
H. Sebastian Seung, 2018
A companion paper introduces a nonlinear network with Hebbian excitatory (E) neurons that are reciprocally coupled with anti-Hebbian inhibitory (I) neurons and also receive Hebbian feedforward excitation from sensory (S) afferents. The present paper derives the network from two normative principles that are mathematically equivalent but conceptually different. The first principle formulates unsupervised learning as a constrained optimization problem: maximization of S-E correlations subject to a copositivity constraint on E-E correlations. A combination of Legendre and Lagrangian duality yields a zero-sum continuous game between excitatory and inhibitory connections that is solved by the neural network. The second principle defines a zero-sum game between E and I cells. E cells want to maximize S-E correlations and minimize E-I correlations, while I cells want to maximize I-E correlations and minimize power. The conflict between I and E objectives effectively forces the E cells to decorrelate from each other, although only incompletely. Legendre duality yields the neural network.
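
As a purely structural illustration of the circuit motif described in this abstract, the sketch below wires sensory (S) afferents to excitatory (E) cells through Hebbian feedforward weights and couples the E and I populations reciprocally. The settling dynamics and plasticity rules are generic placeholders, not the equations obtained from the two normative principles, and all sizes, rates, and initial weights are assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_s, n_e, n_i, eta = 8, 4, 2, 1e-3

def relu(v):
    return np.maximum(0.0, v)

W_es = 0.1 * rng.standard_normal((n_e, n_s))          # S -> E feedforward (updated Hebbian-ly)
W_ie = 0.1 * np.abs(rng.standard_normal((n_i, n_e)))  # E -> I coupling
W_ei = 0.1 * np.abs(rng.standard_normal((n_e, n_i)))  # I -> E inhibition

for t in range(5000):
    s = relu(rng.standard_normal(n_s))                # sensory afferent activity
    e, i = np.zeros(n_e), np.zeros(n_i)
    for _ in range(50):                               # settle the reciprocal E-I loop
        i = relu(W_ie @ e)
        e = relu(W_es @ s - W_ei @ i)

    # Generic plasticity, for illustration only:
    W_es += eta * (np.outer(e, s) - (e ** 2)[:, None] * W_es)  # Hebbian feedforward with decay
    W_ie += eta * (np.outer(i, e) - (i ** 2)[:, None] * W_ie)  # I cells track correlations with E
    W_ei += eta * (np.outer(e, i) - (e ** 2)[:, None] * W_ei)  # inhibition strengthens where E and I co-fire, nudging E cells to decorrelate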