
Biologically-inspired Salience Affected Artificial Neural Network (SANN)

Published by Leendert Remmelzwaal
Publication date: 2019
Paper language: English





In this paper we introduce a novel Salience Affected Artificial Neural Network (SANN) that models the way neuromodulators such as dopamine and noradrenaline affect neural dynamics in the human brain: they are distributed diffusely through neocortical regions, allowing salience signals both to modulate cognition immediately and to enable one-time learning by strengthening entire patterns of activation in a single step. We present a model capable of one-time salience tagging in a neural network trained to classify objects, which returns a salience response during classification (inference). We explore the effects of salience on learning via its effect both on the activation function of each node and on the strength of the weights between nodes in the network. We demonstrate that salience tagging can improve classification confidence both for the individual image and for the class of images it belongs to. We also show that the computational cost of producing a salience response is minimal. This research serves as a proof of concept and could be a first step towards introducing salience tagging into deep learning networks and robotics.
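
As an illustration of the mechanism described in the abstract, the following sketch shows one way a diffuse salience signal could scale node activations and strengthen an entire active pattern in a single pass. All names (`SalienceLayer`, `tag`, `salience_response`) and the specific update rules are illustrative assumptions, not the paper's equations.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SalienceLayer:
    """Hypothetical layer whose nodes carry a per-node salience tag."""

    def __init__(self, n_in, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.1, (n_out, n_in))
        self.s = np.zeros(n_out)                 # per-node salience tag

    def forward(self, x, salience=0.0):
        a = sigmoid(self.W @ x)
        # A diffuse salience signal scales every node's activation at once,
        # loosely mimicking a neuromodulator such as dopamine.
        return a * (1.0 + salience * self.s)

    def tag(self, x, salience, lr=0.5):
        # One-shot salience tagging: strengthen the whole active pattern
        # in a single pass, with no iterative retraining.
        a = sigmoid(self.W @ x)
        self.s += lr * salience * a              # tag the active nodes
        self.W += lr * salience * np.outer(a, x) # strengthen active weights

    def salience_response(self, x):
        # Scalar salience read-out available during inference.
        a = sigmoid(self.W @ x)
        return float(self.s @ a)

layer = SalienceLayer(n_in=4, n_out=3)
x = np.array([1.0, 0.0, 0.5, 0.2])
layer.tag(x, salience=1.0)                       # single tagging event
print(layer.salience_response(x))                # response to the tagged input
print(layer.salience_response(np.zeros(4)))      # compare: an untagged input
```

In this toy version the salience response costs only one extra dot product at inference, consistent with the abstract's claim that producing it has minimal computational impact.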




Read also

Sensory predictions by the brain in all modalities take place as a result of bottom-up and top-down connections both in the neocortex and between the neocortex and the thalamus. The bottom-up connections in the cortex are responsible for learning, pattern recognition, and object classification, and have been widely modelled using artificial neural networks (ANNs). Here, we present a neural network architecture modelled on the top-down corticothalamic connections and the behaviour of the thalamus: a corticothalamic neural network (CTNN), consisting of an auto-encoder connected to a difference engine with a threshold. We demonstrate that the CTNN is input agnostic, multi-modal, robust during partial occlusion of one or more sensory inputs, and has significantly higher processing efficiency than other predictive coding models, proportional to the number of sequentially similar inputs in a sequence. This increased efficiency could be highly significant in more complex implementations of this architecture, where the predictive nature of the cortex will allow most of the incoming data to be discarded.
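
The control flow described above can be sketched as follows. The auto-encoder is stubbed out, and the mean-absolute-difference engine and the `threshold` value are assumptions chosen for illustration, not the paper's implementation.

```python
import numpy as np

class CTNN:
    """Sketch: auto-encoder plus a difference engine with a threshold."""

    def __init__(self, autoencoder, threshold=0.05):
        self.ae = autoencoder        # callable: input -> reconstruction
        self.threshold = threshold
        self.prediction = None       # top-down prediction (last output)

    def step(self, x):
        if self.prediction is not None:
            # Difference engine: compare the bottom-up input with the
            # top-down prediction, analogous to thalamic gating.
            error = np.mean(np.abs(x - self.prediction))
            if error < self.threshold:
                # Input matches the prediction: discard it and reuse
                # the previous output (the cheap path).
                return self.prediction, False
        # Surprise: run the full auto-encoder and refresh the prediction.
        self.prediction = self.ae(x)
        return self.prediction, True

# With an identity stand-in for the trained auto-encoder, a run of
# sequentially similar inputs triggers the expensive path only once.
net = CTNN(autoencoder=lambda x: x, threshold=0.05)
for x in [np.zeros(8), np.zeros(8) + 0.01, np.ones(8)]:
    _, reprocessed = net.step(x)
    print(reprocessed)   # True, False, True
```

The early return on similar inputs is what makes efficiency proportional to the number of sequentially similar inputs in a sequence.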
Artificial neural networks have diverged far from their early inspiration in neurology. In spite of their technological and commercial success, they have several shortcomings, most notably the need for a large number of training examples and the resulting computation resources required for iterative learning. Here we describe an approach to neurological network simulation, both architectural and algorithmic, that adheres more closely to established biological principles and overcomes some of the shortcomings of conventional networks.
Spiking neural networks (SNNs) have attracted much attention due to their great potential for modeling time-dependent signals. The firing rate of spiking neurons is determined by a control rate that is fixed manually in advance, so whether the firing rate is adequate for modeling a given time series is a matter of luck. An adaptive control rate is therefore desirable, but achieving one is non-trivial because the control rate and the connection weights learned during training are usually entangled. In this paper, we show that the firing rate is related to the eigenvalue of the spike generation function. Inspired by this insight, we enable the spike generation function to have adaptable eigenvalues rather than parametric control rates, and develop the Bifurcation Spiking Neural Network (BSNN), which has an adaptive firing rate and is insensitive to the setting of control rates. Experiments on a broad range of tasks validate the effectiveness of BSNN, showing that it achieves superior performance to existing SNNs and is robust to the setting of control rates.
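
To make the link between eigenvalue and firing rate concrete, here is a loose, hypothetical illustration using a one-dimensional linear spike generation map, v <- lam*v + I. The map, the threshold, and all constants are assumptions rather than the BSNN equations; `lam` merely stands in for the adaptable eigenvalue.

```python
import numpy as np

def spike_train(I, lam, v_th=1.0, steps=100):
    """Integrate a linear map with eigenvalue lam; spike on threshold."""
    v, spikes = 0.0, []
    for _ in range(steps):
        v = lam * v + I          # linear map: lam is the eigenvalue
        if v >= v_th:            # threshold crossing emits a spike
            spikes.append(1)
            v = 0.0              # reset after the spike
        else:
            spikes.append(0)
    return np.array(spikes)

# A larger eigenvalue drives the state to threshold faster, raising the
# firing rate; making lam trainable would remove the hand-set control rate.
for lam in (0.5, 0.9, 1.1):
    print(lam, spike_train(I=0.2, lam=lam).mean())
```

With `lam = 0.5` the state settles below threshold and never spikes; as the eigenvalue grows past the bifurcation point the neuron fires at an increasing rate, which is the intuition behind learning eigenvalues instead of fixing control rates.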
A developmental disorder that severely impairs communicative and social functions, Autism Spectrum Disorder (ASD) also presents aspects related to mental rigidity, repetitive behavior, and difficulty with abstract reasoning. Moreover, imbalances between excitatory and inhibitory brain states, in addition to disruptions of cortical connectivity, are at the source of autistic behavior. Our main goal is to unveil how these local excitatory imbalances and/or disruptions of long-range brain connections are linked to the cognitive features mentioned above. We developed a theoretical model based on Self-Organizing Maps (SOM), in which a three-level artificial neural network qualitatively incorporates the kinds of alterations observed in the brains of patients with ASD. Computational simulations of our model indicate that high excitatory states or long-distance under-connectivity are at the origin of cognitive alterations such as difficulty in categorization and mental rigidity. More specifically, enlargement of the excitatory synaptic reach areas during the development of a cortical map leads to poor categorization (over-selectivity) and poor concept formation. Both the over-strengthening of local excitatory synapses and long-distance under-connectivity, although through distinct mechanisms, contribute to impaired categorization (under-selectivity) and mental rigidity. Our results indicate how local and global alterations of brain connectivity together give rise to degraded cortical structures in distinct ways and in distinct cortical areas. These alterations would disrupt the encoding of sensory stimuli, the representation of concepts and, thus, the process of categorization, thereby imposing serious limits on mental flexibility and on the capacity for generalization in autistic reasoning.
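
The role of the excitatory reach area can be illustrated with a minimal one-dimensional SOM sketch, where the Gaussian neighborhood radius `reach` stands in for the synaptic reach area. The three-level architecture and the ASD-specific parameters of the paper are not modeled; this only shows the generic SOM mechanism the model builds on.

```python
import numpy as np

def som_step(weights, x, reach, lr=0.1):
    """One self-organizing-map update on a 1-D map of units."""
    n = len(weights)
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best matching unit
    dist = np.abs(np.arange(n) - bmu)                     # distance on the map
    h = np.exp(-dist**2 / (2 * reach**2))  # Gaussian excitatory neighborhood
    weights += lr * h[:, None] * (x - weights)
    return weights

rng = np.random.default_rng(0)
data = rng.random((500, 2))
for reach in (1.0, 8.0):        # narrow vs. enlarged excitatory reach
    w = rng.random((20, 2))
    for x in data:
        som_step(w, x, reach)
    # Quantization error: mean distance from each input to its BMU.
    qe = np.mean([np.min(np.linalg.norm(w - x, axis=1)) for x in data])
    print(reach, round(qe, 3))  # wider reach tends to give a coarser,
                                # less selective map (higher error)
```

An enlarged neighborhood drags all units toward every stimulus, so the map cannot differentiate, which is a toy analogue of the impaired categorization the simulations report.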
Through the success of deep learning in various domains, artificial neural networks are currently among the most used artificial intelligence methods. Taking inspiration from the network properties of biological neural networks (e.g. sparsity, scale-freeness), we argue that, contrary to general practice, artificial neural networks, too, should not have fully-connected layers. Here we propose sparse evolutionary training of artificial neural networks, an algorithm which evolves an initial sparse topology (an Erdős–Rényi random graph) between two consecutive layers of neurons into a scale-free topology during learning. Our method replaces the fully-connected layers of artificial neural networks with sparse ones before training, quadratically reducing the number of parameters with no decrease in accuracy. We demonstrate our claims on restricted Boltzmann machines, multi-layer perceptrons, and convolutional neural networks for unsupervised and supervised learning on 15 datasets. Our approach has the potential to enable artificial neural networks to scale up beyond what is currently possible.
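
The topology update of sparse evolutionary training can be sketched as below, assuming the commonly described prune-and-regrow scheme: drop the fraction `zeta` of smallest-magnitude active weights, then regrow the same number of random connections. Function names, `zeta`, and the density value are illustrative, not taken from the paper's code.

```python
import numpy as np

def init_erdos_renyi_mask(n_in, n_out, density=0.1, rng=None):
    """Sparse Erdős–Rényi topology: each edge is kept i.i.d."""
    if rng is None:
        rng = np.random.default_rng(0)
    return rng.random((n_in, n_out)) < density

def set_rewire(W, mask, zeta=0.3, rng=None):
    """Prune the zeta fraction of weakest connections, regrow at random."""
    if rng is None:
        rng = np.random.default_rng(0)
    active = np.flatnonzero(mask)
    k = int(zeta * active.size)
    # Prune: remove the k smallest-magnitude existing connections.
    weakest = active[np.argsort(np.abs(W.flat[active]))[:k]]
    mask.flat[weakest] = False
    W.flat[weakest] = 0.0
    # Grow: add k new connections at random among the inactive positions.
    inactive = np.flatnonzero(~mask)
    grown = rng.choice(inactive, size=k, replace=False)
    mask.flat[grown] = True
    W.flat[grown] = rng.normal(0, 0.1, size=k)
    return W, mask

rng = np.random.default_rng(1)
mask = init_erdos_renyi_mask(64, 32, density=0.1, rng=rng)
W = np.where(mask, rng.normal(0, 0.1, (64, 32)), 0.0)
W, mask = set_rewire(W, mask, zeta=0.3, rng=rng)
print(mask.sum())   # connection count is preserved across a rewiring round
```

Gradient steps between rewiring rounds are omitted; repeatedly alternating training with this prune-and-regrow step is what lets the sparse topology evolve during learning while the parameter count stays fixed.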