
Training Binary Multilayer Neural Networks for Image Classification using Expectation Backpropagation

Added by Zhiyong Cheng
Publication date: 2015
Language: English





Compared to multilayer neural networks (MNNs) with real weights, Binary Multilayer Neural Networks (BMNNs) can be implemented far more efficiently on dedicated hardware. BMNNs have been shown to be effective on binary classification tasks with the Expectation BackPropagation (EBP) algorithm on high-dimensional text datasets. In this paper, we investigate the capability of BMNNs trained with the EBP algorithm on multiclass image classification tasks. The performance of binary neural networks with multiple hidden layers and different numbers of hidden units is examined on MNIST. We also explore the effectiveness of image spatial filters and the dropout technique in BMNNs. Experimental results on the MNIST dataset show that EBP can obtain a 2.12% test error with binary weights and a 1.66% test error with real weights, which is comparable to the results of the standard BackPropagation algorithm on fully connected MNNs.
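To make the binary-weight evaluation mode concrete, the sketch below shows the inference path of such a network in NumPy: real-valued parameters (in EBP, the posterior means of the weight distribution) are kept during training, and a sign() binarization is applied at test time. This is a minimal illustration of why binary weights suit dedicated hardware (sums instead of multiplications), not the full EBP training procedure; the layer sizes and the tanh nonlinearity are assumptions for the example.

import numpy as np

def binary_mlp_forward(x, real_weights):
    """Inference pass of a binary multilayer network.

    real_weights holds the real-valued parameters kept during training
    (in EBP, posterior means); at test time each matrix is binarized
    with sign(), so the matrix product reduces to additions and
    subtractions on hardware.
    """
    h = x
    for i, W in enumerate(real_weights):
        Wb = np.sign(W)                  # binarize weights to {-1, +1}
        Wb[Wb == 0] = 1.0                # break sign(0) ties
        z = h @ Wb
        h = np.tanh(z) if i < len(real_weights) - 1 else z
    return h

# Toy usage: a 784-200-10 network on one fake MNIST-sized input.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(784, 200)), rng.normal(size=(200, 10))]
print(binary_mlp_forward(rng.normal(size=(1, 784)), weights).argmax())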



Related research

Deep spiking neural networks (SNNs) hold great potential for improving the latency and energy efficiency of deep neural networks through event-based computation. However, training such networks is difficult due to the non-differentiable nature of asynchronous spike events. In this paper, we introduce a novel technique which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are treated as noise. This enables an error backpropagation mechanism for deep SNNs which works directly on spike signals and membrane potentials. Thus, compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. Our novel framework outperforms all previously reported results for SNNs on the permutation-invariant MNIST benchmark, as well as the N-MNIST benchmark recorded with event-based vision sensors.
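The core idea, treating the membrane potential as the differentiable signal and the reset discontinuity as noise, can be sketched as follows. This is an illustrative leaky integrate-and-fire step in NumPy under assumed constants (threshold, leak factor), not the authors' exact neuron model.

import numpy as np

def lif_step(v, x, w, v_th=1.0, leak=0.9):
    """One timestep of a leaky integrate-and-fire layer.

    Between spikes the membrane potential v is a smooth function of
    the inputs, so gradients can be taken through it directly; the
    discontinuous reset at spike times is the part treated as noise
    and ignored in the backward pass.
    """
    v = leak * v + x @ w                  # integrate input current
    spikes = (v >= v_th).astype(float)    # non-differentiable event
    v = v - spikes * v_th                 # reset by subtraction
    return v, spikes

# Toy usage: drive 20 neurons with random 100-dim spike inputs.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.3, size=(100, 20))
v = np.zeros((1, 20))
for _ in range(50):
    x = (rng.random((1, 100)) < 0.1).astype(float)
    v, s = lif_step(v, x, w)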
Wenrui Zhang, Peng Li (2019)
Spiking neural networks (SNNs) are well suited to spatiotemporal learning and to energy-efficient, event-driven neuromorphic hardware. As an important class of SNNs, recurrent spiking neural networks (RSNNs) possess great computational power. However, the practical application of RSNNs is severely limited by challenges in training. Biologically inspired unsupervised learning has limited capability to boost the performance of RSNNs. On the other hand, existing backpropagation (BP) methods suffer from the high complexity of unrolling in time, vanishing and exploding gradients, and the approximate differentiation of discontinuous spiking activities when applied to RSNNs. To enable supervised training of RSNNs under a well-defined loss function, we present a novel Spike-Train level RSNNs Backpropagation (ST-RSBP) algorithm for training deep RSNNs. The proposed ST-RSBP directly computes the gradient of a rate-coded loss function, defined at the output layer of the network, w.r.t. the tunable parameters. The scalability of ST-RSBP is achieved by the proposed spike-train level computation, during which the temporal effects of the SNN are captured in both the forward and backward passes of BP. Our ST-RSBP algorithm can be broadly applied to RSNNs with a single recurrent layer or to deep RSNNs with multiple feed-forward and recurrent layers. On challenging speech and image datasets including TI46, N-TIDIGITS, Fashion-MNIST and MNIST, ST-RSBP is able to train RSNNs with an accuracy surpassing that of current state-of-the-art SNN BP algorithms and conventional non-spiking deep learning models.
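A minimal sketch of the spike-train level idea at the output layer: the loss is defined on firing rates aggregated over the whole simulation window, so its gradient is well defined on spike counts rather than on individual spike times. The squared-error form and variable names here are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def rate_coded_loss(out_spikes, target_rates):
    """Rate-coded loss at the output layer (sketch).

    out_spikes: (T, n_out) binary array of output spikes over T steps.
    The loss compares window-averaged firing rates with target rates,
    so the gradient lives at the spike-train level.
    """
    T = out_spikes.shape[0]
    rates = out_spikes.sum(axis=0) / T
    err = rates - target_rates
    return 0.5 * np.sum(err ** 2), err   # loss and d(loss)/d(rates)

# Toy usage: 3 output neurons over a 100-step window.
rng = np.random.default_rng(0)
spikes = (rng.random((100, 3)) < 0.2).astype(float)
loss, grad = rate_coded_loss(spikes, np.array([1.0, 0.1, 0.1]))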
Binary neural networks, which greatly reduce storage and computation, are a promising technique for deploying deep models on resource-limited devices. However, binarization inevitably causes severe information loss and, even worse, its discontinuity makes optimizing the deep network difficult. To address these issues, a variety of algorithms have been proposed and have achieved encouraging progress in recent years. In this paper, we present a comprehensive survey of these algorithms, mainly categorized into native solutions that directly conduct binarization and optimized ones that use techniques such as minimizing the quantization error, improving the network loss function, and reducing the gradient error. We also investigate other practical aspects of binary neural networks, such as hardware-friendly design and training tricks. We then give evaluations and discussions on different tasks, including image classification, object detection and semantic segmentation. Finally, we outline the challenges that may be faced in future research.
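As one concrete example of the "minimizing the quantization error" family the survey describes, the sketch below shows scaled binarization in the style of XNOR-Net: approximate W by alpha * sign(W), where the L2-optimal scale is the mean absolute weight. This is a sketch of the general trick, not any one surveyed method's full pipeline.

import numpy as np

def binarize_min_qerror(W):
    """Scaled binarization minimizing the L2 quantization error.

    For ||W - alpha * B||^2 with B in {-1, +1}, the optimum is
    B = sign(W) and alpha = mean(|W|), so the binary layer keeps a
    single real-valued scale per tensor.
    """
    B = np.sign(W)
    B[B == 0] = 1.0
    alpha = np.abs(W).mean()
    return alpha, B

# The approximation alpha * B tracks W much better than sign(W) alone.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.02, size=(256, 256))
alpha, B = binarize_min_qerror(W)
print(np.linalg.norm(W - alpha * B), np.linalg.norm(W - B))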
Computation using brain-inspired spiking neural networks (SNNs) with neuromorphic hardware may offer orders of magnitude higher energy efficiency than current analog neural networks (ANNs). Unfortunately, training SNNs with the same number of layers as state-of-the-art ANNs remains a challenge. To our knowledge, the only method that has succeeded in this regard is supervised training of an ANN followed by its conversion to an SNN. In this work we directly train deep SNNs using backpropagation with a surrogate gradient and find that, due to the implicitly recurrent nature of feed-forward SNNs, the exploding or vanishing gradient problem severely hinders their training. We show that this problem can be solved by tuning the surrogate gradient function. We also propose using batch normalization from the ANN literature on the input currents of SNN neurons. With these improvements we show that it is possible to train an SNN with a ResNet50 architecture on the CIFAR100 and Imagenette object recognition datasets. The trained SNN falls behind the analogous ANN in accuracy but requires several orders of magnitude fewer inference time steps (as few as 10) to reach good accuracy, compared to SNNs obtained by conversion from an ANN, which require on the order of 1000 time steps.
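The two proposed fixes can be sketched independently: a surrogate derivative for the spike threshold whose width is tunable (a fast-sigmoid shape is a common choice, assumed here), and batch normalization applied to the input currents of the spiking neurons. The constants and shapes below are illustrative assumptions.

import numpy as np

def surrogate_grad(v, v_th=1.0, slope=5.0):
    """Stand-in derivative for the spike threshold.

    A fast-sigmoid-style surrogate; slope controls its width, and
    tuning that width is what the paper identifies as the cure for
    exploding or vanishing gradients in the implicitly recurrent
    SNN dynamics.
    """
    return 1.0 / (1.0 + slope * np.abs(v - v_th)) ** 2

def batchnorm_currents(I, eps=1e-5):
    """Batch-normalize input currents per neuron, borrowing the
    standard normalization formula from the ANN literature."""
    mu = I.mean(axis=0, keepdims=True)
    var = I.var(axis=0, keepdims=True)
    return (I - mu) / np.sqrt(var + eps)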
We seek to investigate the scalability of neuromorphic computing for computer vision, with the objective of replicating non-neuromorphic performance on computer vision tasks while reducing power consumption. We convert the deep Artificial Neural Network (ANN) architecture U-Net to a Spiking Neural Network (SNN) architecture using the Nengo framework. Both rate-based and spike-based models are trained and optimized for benchmarking performance and power, using a modified version of the ISBI 2D EM Segmentation dataset consisting of microscope images of cells. We propose a partitioning method that optimizes inter-chip communication to improve speed and energy efficiency when deploying multi-chip networks on the Loihi neuromorphic chip. We explore the advantages of regularizing the firing rates of Loihi neurons to convert the ANN to an SNN with minimal accuracy loss and optimized energy consumption, and we propose a percentile-based regularization loss function to keep the spiking rate of each neuron within a desired range. The SNN is converted directly from the corresponding ANN and demonstrates semantic segmentation similar to the ANN's, using the same number of neurons and weights. Moreover, the neuromorphic implementation on the Intel Loihi chip is over 2x more energy-efficient than conventional hardware (CPU, GPU) when running online (one image at a time). These power improvements are achieved without sacrificing the task-performance accuracy of the network, and with all weights (Loihi, CPU, and GPU networks) quantized to 8 bits.
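The percentile-based regularizer can be sketched as a hinge-style penalty on the empirical distribution of firing rates: it pushes a low percentile above a minimum rate and a high percentile below a maximum. The specific percentiles and rate band below are hypothetical; the paper specifies the idea, not these constants.

import numpy as np

def percentile_rate_loss(rates, low_pct=10.0, high_pct=99.0,
                         r_min=0.05, r_max=0.5):
    """Percentile-based firing-rate regularizer (sketch).

    rates: array of per-neuron firing rates. The penalty is nonzero
    when the low_pct percentile falls below r_min or the high_pct
    percentile exceeds r_max, steering most neurons into the band.
    """
    lo = np.percentile(rates, low_pct)
    hi = np.percentile(rates, high_pct)
    return max(0.0, r_min - lo) ** 2 + max(0.0, hi - r_max) ** 2

# Toy usage: rates drawn too high trigger the upper-band penalty.
rng = np.random.default_rng(0)
print(percentile_rate_loss(rng.uniform(0.0, 0.9, size=1000)))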
