Spiking Neural Networks (SNNs) are brain-inspired, event-driven machine learning algorithms that are widely recognized for their potential to enable ultra-high-energy-efficient hardware. Among existing SNNs, unsupervised SNNs based on synaptic plasticity, especially Spike-Timing-Dependent Plasticity (STDP), are considered to have great potential for imitating the learning process of the biological brain. Nevertheless, existing STDP-based SNNs are limited by constrained learning capability and/or slow learning speed: most adopt a slow-learning Fully-Connected (FC) architecture and use a sub-optimal vote-based scheme for spike decoding. In this paper, we overcome these limitations with: 1) a high-parallelism network architecture, inspired by the Inception module in Artificial Neural Networks (ANNs); 2) a Vote-for-All (VFA) decoding layer that replaces the standard vote-based spike decoding scheme and reduces information loss in spike decoding; and 3) an adaptive repolarization (resetting) mechanism that accelerates SNN learning by enhancing spiking activity. Experimental results on two established benchmark datasets (MNIST/EMNIST) show that our architecture outperforms both the widely used FC architecture and a more advanced Locally-Connected (LC) architecture, and that our SNN achieves accuracy competitive with state-of-the-art unsupervised SNNs (95.64%/80.11% on MNIST/EMNIST) while offering superior learning efficiency and robustness against hardware damage. Our SNN reaches high classification accuracy within only hundreds of training iterations, and random destruction of large numbers of synapses or neurons leads to only negligible performance degradation.
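As a rough illustration of the mechanisms named above, the sketch below pairs a leaky integrate-and-fire (LIF) layer with pair-based STDP and an adaptive reset whose value drifts toward threshold while a neuron stays silent, so under-active neurons repolarize closer to firing. This is a minimal sketch under assumed parameters; the adaptation rule, `reset_gain`, and all constants are illustrative placeholders, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions chosen for illustration: flattened MNIST -> 100 excitatory neurons.
n_in, n_out = 784, 100
W = rng.uniform(0.0, 0.3, (n_out, n_in))   # plastic feed-forward weights

# LIF neuron state and constants (assumed values, not from the paper)
v = np.zeros(n_out)                        # membrane potentials
v_rest, v_thresh = 0.0, 1.0
tau_m, dt = 20.0, 1.0                      # ms

# Exponential STDP traces and learning rates
x_pre, x_post = np.zeros(n_in), np.zeros(n_out)
tau_pre, tau_post = 20.0, 20.0
a_plus, a_minus = 0.01, 0.012

# Adaptive repolarization: a per-neuron reset potential that creeps toward
# threshold while a neuron is silent, boosting spiking activity overall.
v_reset = np.full(n_out, v_rest)
reset_gain = 0.001                         # assumed adaptation rate

def step(spikes_in):
    """One simulation step: integrate, fire, adaptively reset, apply STDP."""
    global v, v_reset, x_pre, x_post, W
    s_in = spikes_in.astype(float)

    # Leaky integration of weighted input spikes
    v += (dt / tau_m) * (v_rest - v) + W @ s_in
    s_out = (v >= v_thresh).astype(float)

    # Repolarize firing neurons to their (adaptive) reset potential
    v = np.where(s_out > 0, v_reset, v)
    # Adapt the reset: firing neurons sink back toward rest, while silent
    # neurons' reset rises toward (but stays below) threshold
    v_reset = np.where(s_out > 0,
                       np.maximum(v_reset - 10 * reset_gain, v_rest),
                       np.minimum(v_reset + reset_gain, 0.9 * v_thresh))

    # Update exponential pre/post traces
    x_pre += -(dt / tau_pre) * x_pre + s_in
    x_post += -(dt / tau_post) * x_post + s_out

    # Pair-based STDP: potentiate on post spikes, depress on pre spikes
    W += a_plus * np.outer(s_out, x_pre) - a_minus * np.outer(x_post, s_in)
    np.clip(W, 0.0, 1.0, out=W)
    return s_out

# Example: drive the layer with Bernoulli-sampled, Poisson-like input spikes
for _ in range(100):
    step(rng.random(n_in) < 0.05)
```

In this sketch the adaptive reset plays the accelerating role the abstract attributes to repolarization: neurons that rarely fire end up resetting nearer to threshold, so their synapses participate in STDP updates sooner.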