Unlike the brain, artificial neural networks, including state-of-the-art deep neural networks for computer vision, are subject to catastrophic forgetting: they rapidly forget the previous task when trained on a new one. Neuroscience suggests that biological synapses avoid this issue through the process of synaptic consolidation and metaplasticity: the plasticity itself changes upon repeated synaptic events. In this work, we show that this concept of metaplasticity can be transferred to a particular type of deep neural networks, binarized neural networks, to reduce catastrophic forgetting.
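A minimal sketch of the kind of metaplastic consolidation rule this abstract describes, applied to the hidden real-valued weights behind a binarized network. The attenuation function (1 - tanh²) and all names here are illustrative assumptions, not the authors' published code: the idea shown is only that updates pushing a hidden weight back toward zero (toward flipping its binary sign) are damped more strongly the larger that hidden weight already is.

```python
import torch

def metaplastic_update(w_hidden, grad, lr=0.01, m=1.0):
    """One SGD step on the hidden (real-valued) weights of a BNN with
    metaplastic attenuation: updates that move a hidden weight toward
    zero (i.e., toward flipping its binary sign) are scaled down by a
    factor that shrinks as the hidden weight grows, consolidating it."""
    update = -lr * grad
    # An update "weakens" the synapse when it points opposite to the
    # hidden weight, i.e., it drives the weight toward zero.
    weakening = torch.sign(update) != torch.sign(w_hidden)
    f_meta = 1.0 - torch.tanh(m * w_hidden.abs()) ** 2   # in (0, 1]
    scale = torch.where(weakening, f_meta, torch.ones_like(f_meta))
    return w_hidden + scale * update

# The binary weights actually used in the forward pass would be:
# w_binary = torch.sign(w_hidden)
```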
While deep neural networks have surpassed human performance in multiple situations, they are prone to catastrophic forgetting: upon training a new task, they rapidly forget previously learned ones. Neuroscience studies, based on idealized tasks, suggest …
The recently proposed network model, Operational Neural Networks (ONNs), can generalize the conventional Convolutional Neural Networks (CNNs), which are homogeneous, with only a linear neuron model. As a heterogeneous network model, ONNs are based on a …
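To make the generalization concrete, here is a sketch of a single "operational" neuron in which the multiply-and-sum of a linear neuron is replaced by arbitrary nodal and pool operators. The function name, signature, and the particular operator choices are assumptions for illustration, not the ONN paper's definitions:

```python
import torch

def operational_neuron(x, w, b, nodal, pool, activation=torch.tanh):
    """A generalized neuron: the usual multiply (nodal) and sum (pool)
    of a linear/convolutional neuron are replaced by arbitrary operators."""
    z = nodal(x, w)      # element-wise nodal operator, e.g. x * w or sin(x * w)
    y = pool(z) + b      # pool operator collapsing over inputs, e.g. sum or median
    return activation(y)

x = torch.randn(16)      # inputs to one neuron
w = torch.randn(16)      # one weight per input
b = torch.tensor(0.1)

# The linear (CNN-style) neuron is the special case nodal=multiply, pool=sum:
linear_out = operational_neuron(x, w, b, nodal=torch.mul, pool=torch.sum)

# A different choice of operators gives one of many possible non-linear neurons:
onn_out = operational_neuron(x, w, b,
                             nodal=lambda x, w: torch.sin(x * w),
                             pool=lambda z: torch.median(z))
```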
We introduce a method to train Binarized Neural Networks (BNNs) - neural networks with binary weights and activations at run-time and when computing the parameters' gradient at train-time. We conduct two sets of experiments, each based on a different …
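A minimal sketch of the binarization step commonly used when training such networks: sign binarization in the forward pass with a straight-through estimator in the backward pass, while real-valued weights are kept and updated at train time. This is a generic recipe written for illustration; the published method's details may differ.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a straight-through estimator: forward uses
    sign(x); backward passes the gradient through wherever |x| <= 1."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)

# Real-valued weights accumulate gradients; the forward pass only sees
# their binarized version, and the input activations are binarized too.
w_real = torch.randn(64, 32, requires_grad=True)
x = torch.randn(8, 64)
w_bin = BinarizeSTE.apply(w_real)   # {-1, +1} weights
out = torch.sign(x) @ w_bin         # binary activations at run-time
out.sum().backward()                # gradients reach w_real via the STE
```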
As an important class of spiking neural networks (SNNs), recurrent spiking neural networks (RSNNs) possess great computational power and have been widely used for processing sequential data like audio and text. However, most RSNNs suffer from two …
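For context, one time step of a generic leaky integrate-and-fire recurrent layer, the basic building block of an RSNN. This is a textbook-style formulation under assumed names and parameters, not the specific model of the cited work:

```python
import torch

def rsnn_step(x_t, spikes_prev, v_prev, w_in, w_rec, decay=0.9, threshold=1.0):
    """One step of a leaky integrate-and-fire recurrent layer: membrane
    potentials integrate feed-forward and recurrent spike input, leak,
    emit a binary spike at threshold, and are reset by subtraction."""
    v = decay * v_prev + x_t @ w_in + spikes_prev @ w_rec
    spikes = (v >= threshold).to(v.dtype)   # binary spike output
    v = v - threshold * spikes              # soft reset after spiking
    return spikes, v

# Example: 20 time steps through a 100-neuron recurrent layer.
n_in, n_rec, T = 30, 100, 20
w_in = 0.1 * torch.randn(n_in, n_rec)
w_rec = 0.1 * torch.randn(n_rec, n_rec)
spikes, v = torch.zeros(n_rec), torch.zeros(n_rec)
for t in range(T):
    x_t = torch.randn(n_in)
    spikes, v = rsnn_step(x_t, spikes, v, w_in, w_rec)
```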
We present a mathematical model which explains and interprets a novel form of short-term potentiation, which was found to be use-, but not time-dependent, in experiments done on Lymnaea neurons. The high degree of potentiation is explained using a …