
Recurrent Feedback Improves Feedforward Representations in Deep Neural Networks

Posted by: Tai Sing Lee
Publication date: 2019
Paper language: English





The abundant recurrent horizontal and feedback connections in the primate visual cortex are thought to play an important role in bringing global and semantic contextual information to early visual areas during perceptual inference, helping to resolve local ambiguity and fill in missing details. In this study, we find that introducing feedback loops and horizontal recurrent connections to a deep convolutional neural network (VGG16) makes the network more robust against noise and occlusion during inference, even in the initial feedforward pass. This suggests that recurrent feedback and contextual modulation transform the feedforward representations of the network in a meaningful way. We study the population codes of neurons in the network before and after learning with feedback, and find that learning with feedback increases the discriminability (measured by d-prime) between the different object classes in the population codes of neurons along the feedforward path, even at the earliest layer that receives feedback. We find that recurrent feedback, by injecting top-down semantic meaning into the population activities, helps the network learn better feedforward paths that robustly map noisy image patches to the latent representations corresponding to the important visual concepts of each object class, resulting in greater robustness against noise and occlusion as well as better fine-grained recognition.
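To make the d-prime measure concrete, here is a minimal sketch of one standard definition of d' between two classes' responses along a single unit or population dimension; the data and values below are illustrative, not taken from the paper.

```python
import numpy as np

def dprime(responses_a, responses_b):
    """d' between two classes: difference of mean responses,
    scaled by the pooled standard deviation."""
    mu_a, mu_b = responses_a.mean(), responses_b.mean()
    var_a, var_b = responses_a.var(ddof=1), responses_b.var(ddof=1)
    return (mu_a - mu_b) / np.sqrt(0.5 * (var_a + var_b))

# Hypothetical unit activations for two object classes
rng = np.random.default_rng(0)
class_a = rng.normal(1.0, 1.0, size=500)  # activations on class-A images
class_b = rng.normal(0.0, 1.0, size=500)  # activations on class-B images
print(f"d' = {dprime(class_a, class_b):.2f}")  # ~1.0 for a unit-variance shift of 1
```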


Read also

We introduce a new structure for memory neural networks, called feedforward sequential memory networks (FSMN), which can learn long-term dependencies without using recurrent feedback. The proposed FSMN is a standard feedforward neural network equipped with learnable sequential memory blocks in the hidden layers. In this work, we apply FSMN to several language modeling (LM) tasks. Experimental results show that the memory blocks in FSMN can learn effective representations of long histories, and that FSMN-based language models significantly outperform not only feedforward neural network (FNN) based LMs but also the popular recurrent neural network (RNN) LMs.
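As a rough illustration of the memory-block idea, here is a minimal sketch of the scalar, unidirectional variant, where the memory output at step t is a learned weighted sum of the current and past hidden activations; the shapes and tap values below are hypothetical.

```python
import numpy as np

def fsmn_memory_block(hidden_seq, taps):
    """Scalar FSMN memory block: memory[t] = sum_i taps[i] * hidden_seq[t-i].
    hidden_seq: (T, D) hidden activations; taps: (N+1,) learnable weights."""
    memory = np.zeros_like(hidden_seq)
    for t in range(hidden_seq.shape[0]):
        for i, a in enumerate(taps):
            if t - i >= 0:
                memory[t] += a * hidden_seq[t - i]
    return memory  # fed forward alongside the ordinary hidden activations

# Toy usage: 10 time steps, 8 hidden units, a 4-tap history window
h = np.random.randn(10, 8)
h_tilde = fsmn_memory_block(h, taps=np.full(4, 0.25))
```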
Several studies with brain signals have suggested that bottom-up and top-down influences are exerted through distinct frequency bands among visual cortical areas. It has recently been shown in primates that theta and gamma rhythms subserve feedforward influence, whereas feedback influence is dominated by the alpha-beta rhythm. A few theoretical models for reproducing these effects have been proposed so far. Here we show that a simple but biophysically plausible two-network motif composed of spiking-neuron models and chemical synapses can exhibit feedforward and feedback influences through distinct frequency bands. Differently from previous studies, this kind of model allows us to study directed influences not only at the population level, using a proxy for the local field potential, but also at the cellular level, using the neuronal spiking series.
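For readers unfamiliar with the motif, below is a minimal sketch of two spiking populations coupled by exponential chemical synapses, with a mean-synaptic-input proxy for the LFP. It is an illustrative stand-in, not the paper's model: the neuron model (leaky integrate-and-fire) and all parameters are assumptions.

```python
import numpy as np

dt, T, n = 0.1, 2000, 100              # time step (ms), steps, neurons per population
tau_v, tau_s, v_th = 10.0, 5.0, 1.0    # membrane/synaptic time constants, threshold
w_ff, w_fb = 0.02, 0.01                # feedforward and feedback coupling weights
rng = np.random.default_rng(1)

v = np.zeros((2, n))                   # membrane voltages of both populations
s = np.zeros((2, n))                   # synaptic gating variables
lfp = np.zeros((2, T))                 # LFP proxy: mean synaptic activation
for t in range(T):
    # population 0 receives feedback from 1; population 1 receives feedforward from 0
    drive = np.array([s[1].mean() * w_fb, s[0].mean() * w_ff]) * n
    noise = rng.normal(0.15, 0.05, size=(2, n))
    v += dt * (-v / tau_v + noise + drive[:, None])
    spikes = v >= v_th
    v[spikes] = 0.0                    # reset spiking neurons
    s += dt * (-s / tau_s)
    s[spikes] += 1.0                   # exponential chemical synapses
    lfp[:, t] = s.mean(axis=1)
# Spectral or Granger analysis of `lfp` per frequency band would then
# quantify the directed feedforward vs. feedback influences.
```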
Recently, we put forward a redox molecular hypothesis involving the natural biophysical substrate of visual perception and imagery. Here, we explicitly propose that the feedback and feedforward iterative operation processes can be interpreted in terms of a homunculus looking at the biophysical picture in our brain during visual imagery. We further propose that the brain can use both picture-like and language-like representation processes. In our interpretation, visualization (imagery) is a special kind of representation; that is, visual imagery requires a peculiar inherent biophysical (picture-like) mechanism. We also conjecture that the evolution of higher levels of complexity made the biophysical picture representation of the external visual world possible, via controlled redox and bioluminescent nonlinear (iterative) biochemical reactions in the V1 and V2 areas during visual imagery. Our proposal deals only with the primary level of visual representation (i.e., the perceived scene).
Gonçalo Mestre, 2021
Given the inner complexity of the human nervous system, insight into the dynamics of brain activity can be gained from understanding smaller and simpler organisms, such as the nematode C. elegans. The behavioural and structural biology of these organisms is well known, making them prime candidates for benchmarking modelling and simulation techniques. In these complex neuronal collections, classical white-box modelling techniques based on intrinsic structural or behavioural information either cannot capture the profound nonlinearities of the neuronal response to different stimuli or generate extremely complex models that are computationally intractable. In this paper we show how the nervous system of C. elegans can be modelled and simulated with data-driven models using different neural network architectures. Specifically, we target state-of-the-art recurrent neural network architectures such as LSTMs and GRUs, and compare these architectures in terms of their properties, their accuracy, and the complexity of the resulting models. We show that GRU models with a hidden layer size of 4 units are able to reproduce the system's response to very different stimuli with high accuracy.
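The kind of data-driven model described above could look like the following minimal PyTorch sketch: a small GRU mapping a stimulus time series to predicted neuronal responses. The 4-unit hidden size follows the abstract; the input/output dimensions and the linear readout are assumptions for illustration.

```python
import torch
import torch.nn as nn

class NeuronGRU(nn.Module):
    """GRU that maps stimulus sequences to per-step neuronal responses."""
    def __init__(self, n_stimuli=1, hidden_size=4, n_neurons=1):
        super().__init__()
        self.gru = nn.GRU(n_stimuli, hidden_size, batch_first=True)
        self.readout = nn.Linear(hidden_size, n_neurons)

    def forward(self, stimulus):        # stimulus: (batch, time, n_stimuli)
        states, _ = self.gru(stimulus)
        return self.readout(states)     # (batch, time, n_neurons)

# Toy usage: 8 hypothetical trials of 200 time steps each
model = NeuronGRU()
stim = torch.randn(8, 200, 1)
pred = model(stim)                      # predicted responses, shape (8, 200, 1)
```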
This perspective piece came about through the Generative Adversarial Collaboration (GAC) series of workshops organized by the Computational Cognitive Neuroscience (CCN) conference in 2020. We brought together a number of experts from the field of theoretical neuroscience to debate emerging issues in our understanding of how learning is implemented in biological recurrent neural networks. Here, we give a brief review of common assumptions about biological learning and the corresponding findings from experimental neuroscience, and contrast them with the efficiency of gradient-based learning in the recurrent neural networks commonly used in artificial intelligence. We then outline the key issues discussed in the workshop: synaptic plasticity, neural circuits, the theory-experiment divide, and objective functions. Finally, we conclude with recommendations for both theoretical and experimental neuroscientists when designing new studies that could help bring clarity to these issues.


