
S2TA: Exploiting Structured Sparsity for Energy-Efficient Mobile CNN Acceleration

Added by Zhi-Gang Liu
Publication date: 2021
Language: English





Exploiting sparsity is a key technique in accelerating quantized convolutional neural network (CNN) inference on mobile devices. Prior sparse CNN accelerators largely exploit unstructured sparsity and achieve significant speedups. Due to the unbounded, largely unpredictable sparsity patterns, however, exploiting unstructured sparsity requires complicated hardware design with significant energy and area overhead, which is particularly detrimental to mobile/IoT inference scenarios where energy and area efficiency are crucial. We propose to exploit structured sparsity, more specifically, Density Bound Block (DBB) sparsity for both weights and activations. DBB block tensors bound the maximum number of non-zeros per block. DBB thus exposes statically predictable sparsity patterns that enable lean sparsity-exploiting hardware. We propose new hardware primitives to implement DBB sparsity for (static) weights and (dynamic) activations, respectively, with very low overheads. Building on top of the primitives, we describe S2TA, a systolic array-based CNN accelerator that exploits joint weight and activation DBB sparsity and new dimensions of data reuse unavailable on the traditional systolic array. S2TA in 16nm achieves more than 2x speedup and energy reduction compared to a strong baseline of a systolic array with zero-value clock gating, over five popular CNN benchmarks. Compared to two recent non-systolic sparse accelerators, Eyeriss v2 (65nm) and SparTen (45nm), S2TA in 65nm uses about 2.2x and 3.1x less energy per inference, respectively.
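
To make the DBB constraint concrete, the following is a minimal NumPy sketch of how a weight tensor might be pruned to DBB sparsity. The block size of 8 and the bound of 4 non-zeros per block are illustrative assumptions, not the configuration evaluated in the paper.

import numpy as np

def dbb_prune(weights, block_size=8, max_nonzeros=4):
    # Prune a 1-D weight vector to Density Bound Block (DBB) sparsity:
    # each block of `block_size` values keeps its `max_nonzeros` largest
    # magnitudes and zeroes the rest, so every block satisfies the bound.
    w = np.pad(weights.astype(float), (0, (-len(weights)) % block_size))
    blocks = w.reshape(-1, block_size)
    drop = np.argsort(np.abs(blocks), axis=1)[:, :block_size - max_nonzeros]
    np.put_along_axis(blocks, drop, 0.0, axis=1)
    return blocks.reshape(-1)[:len(weights)]

# Every block of 8 in the pruned vector has at most 4 non-zeros,
# so hardware can statically provision for that worst case.
w = np.random.randn(32)
print(dbb_prune(w).reshape(-1, 8))

Because the bound holds for every block, per-block storage and index logic can be sized statically, which is the property the abstract describes as statically predictable sparsity.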



Related research

Convolutional neural network (CNN) inference on mobile devices demands efficient hardware acceleration of low-precision (INT8) general matrix multiplication (GEMM). Exploiting data sparsity is a common approach to further accelerate GEMM for CNN inference, and in particular, structural sparsity has the advantages of predictable load balancing and very low index overhead. In this paper, we address a key architectural challenge with structural sparsity: how to provide support for a range of sparsity levels while maintaining high utilization of the hardware. We describe a time-unrolled formulation of variable density-bound block (VDBB) sparsity that allows for a configurable number of non-zero elements per block, at constant utilization. We then describe a systolic array microarchitecture that implements this scheme, with two data reuse optimizations. Firstly, we increase reuse in both operands and partial products by increasing the number of MACs per PE. Secondly, we introduce a novel approach of moving the IM2COL transform into the hardware, which allows us to achieve a 3x data bandwidth expansion just before the operands are consumed by the datapath, reducing the SRAM power consumption. The optimizations for weight sparsity, activation sparsity and data reuse are all interrelated, and therefore the optimal combination is not obvious. We therefore perform a design space evaluation to find the Pareto-optimal design characteristics. The resulting design achieves 16.8 TOPS/W in 16nm with modest 50% model sparsity and scales with model sparsity up to 55.7 TOPS/W at 87.5%. As well as successfully demonstrating the variable DBB technique, this result significantly outperforms previously reported sparse CNN accelerators.
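
As a rough software analogy for the variable density-bound block idea (not the paper's microarchitecture), the sketch below compresses one block into value/index pairs under a configurable bound and computes a dot product over only the stored non-zeros. The block size and bound are assumed for illustration.

import numpy as np

def vdbb_encode(block, max_nonzeros):
    # Keep the `max_nonzeros` largest-magnitude entries of one block and
    # return them as (values, in-block indices); a larger bound simply
    # streams more pairs over extra cycles in a time-unrolled scheme.
    keep = np.sort(np.argsort(np.abs(block))[-max_nonzeros:])
    return block[keep], keep.astype(np.int8)   # small index metadata

def vdbb_dot(values, indices, activations):
    # Multiply only the stored non-zeros with their matching activations.
    return float(np.dot(values, activations[indices]))

block = np.array([0.0, 1.5, 0.0, 0.0, 0.0, 0.0, 3.1, 0.0])
acts = np.arange(8, dtype=float)
vals, idx = vdbb_encode(block, max_nonzeros=2)
print(vdbb_dot(vals, idx, acts), "vs dense", float(np.dot(block, acts)))
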
A non-trivial interplay between quantum coherence and dissipative environment-driven dynamics is becoming increasingly recognised as key for efficient energy transport in photosynthetic pigment-protein complexes, and converting these biologically-inspired insights into a set of design principles that can be implemented in artificial light-harvesting systems has become an active research field. Here we identify a specific design principle - the phonon antenna - that demonstrates how inter-pigment coherence is able to modify and optimize the way that excitations spectrally sample their local environmental fluctuations. We place this principle into a broader context and furthermore we provide evidence that the Fenna-Matthews-Olson complex of green sulphur bacteria has an excitonic structure that is close to such an optimal operating point, and suggest that this general design principle might well be exploited in other biomolecular systems.
Convolutional neural networks (CNNs) achieve state-of-the-art accuracy in a variety of tasks in computer vision and beyond. One of the major obstacles hindering the ubiquitous use of CNNs for inference on low-power edge devices is their high computational complexity and memory bandwidth requirements. The latter often dominates the energy footprint on modern hardware. In this paper, we introduce a lossy transform coding approach, inspired by image and video compression, designed to reduce the memory bandwidth due to the storage of intermediate activation calculation results. Our method does not require fine-tuning the network weights and halves the data transfer volumes to the main memory by compressing feature maps, which are highly correlated, with variable-length coding. Our method outperforms previous approaches in terms of the number of bits per value, with minor accuracy degradation on ResNet-34 and MobileNetV2. We analyze the performance of our approach on a variety of CNN architectures and demonstrate that an FPGA implementation of ResNet-18 with our approach results in a reduction of around 40% in the memory energy footprint, compared to a quantized network, with negligible impact on accuracy. When allowing accuracy degradation of up to 2%, a reduction of 60% is achieved. A reference implementation is available at https://github.com/CompressTeam/TransformCodingInference
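
The general transform-coding idea (not the paper's exact codec) can be sketched as follows: transform a small activation block with an orthonormal DCT, quantize the coefficients so that most become zero, and leave the cheap storage of those zeros to a variable-length code. The block length and quantization step below are assumptions for illustration only.

import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis, the transform commonly used in image codecs.
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * x + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0] /= np.sqrt(2.0)
    return m

def compress_block(block, step=0.5):
    # Transform, then uniformly quantize: small coefficients collapse to zero,
    # which a variable-length code can store very cheaply (lossy compression).
    return np.round(dct_matrix(len(block)) @ block / step).astype(np.int32)

def decompress_block(q, step=0.5):
    # The basis is orthonormal, so the inverse transform is the transpose.
    return dct_matrix(len(q)).T @ (q * step)

acts = np.maximum(np.random.randn(8), 0)   # ReLU-like activation block
q = compress_block(acts)
print("non-zero coefficients:", np.count_nonzero(q),
      "max reconstruction error:", float(np.max(np.abs(acts - decompress_block(q)))))
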
Training convolutional neural networks (CNNs) usually requires a large amount of computational resources. In this paper, SparseTrain is proposed to accelerate CNN training by fully exploiting sparsity. It involves three levels of innovation: an activation-gradient pruning algorithm, a sparse training dataflow, and an accelerator architecture. By applying a stochastic pruning algorithm to each layer, the sparsity of back-propagation gradients can be increased dramatically without degrading training accuracy or convergence rate. Moreover, to utilize both natural sparsity (resulting from ReLU or pooling layers) and artificial sparsity (introduced by the pruning algorithm), a sparsity-aware architecture is proposed for training acceleration. This architecture supports forward and back-propagation of CNNs by adopting a 1-dimensional convolution dataflow. We have built a simple compiler to map CNN topologies onto SparseTrain, and a cycle-accurate architecture simulator to evaluate performance and efficiency based on the synthesized design in a 14nm FinFET technology. Evaluation results on AlexNet/ResNet show that SparseTrain could achieve about 2.7x speedup and 2.2x energy efficiency improvement on average compared with the original training process.
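
The stochastic gradient pruning step can be illustrated with a common unbiased formulation, assumed here for illustration (the paper's exact rule may differ): gradients below a threshold are either zeroed or rescaled to the threshold with probability proportional to their magnitude, so the expected gradient is preserved while most entries become zero.

import numpy as np

def stochastic_prune(grad, tau):
    # One common unbiased stochastic pruning rule (assumed, not SparseTrain's
    # exact algorithm): entries with |g| >= tau are kept; smaller entries
    # survive with probability |g| / tau and are rescaled to +/- tau, so the
    # expected value of every entry is unchanged while most become zero.
    mag = np.abs(grad)
    keep_small = np.random.rand(*grad.shape) < (mag / tau)
    return np.where(mag >= tau, grad,
                    np.where(keep_small, np.sign(grad) * tau, 0.0))

g = np.random.randn(4, 4) * 0.01
sparse_g = stochastic_prune(g, tau=0.02)
print("gradient sparsity:", float(np.mean(sparse_g == 0.0)))
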
Graph neural networks (GNNs) have started to gain momentum after showing significant performance improvements in a variety of domains including molecular science, recommendation, and transportation. Turning such performance improvements into practical applications relies on effective and efficient execution, especially for inference. However, neither CPUs nor GPUs can meet these needs when considering both performance and energy efficiency, because accelerating GNNs is challenging due to their excessive memory usage and arbitrary interleaving of diverse operations. Moreover, the semantic gap between the high-level GNN programming model and efficient hardware makes it difficult to accelerate general-domain GNNs. To address this challenge, we propose Zipper, an efficient yet general acceleration system for GNNs. The keys to Zipper are a graph-native intermediate representation (IR) and the associated compiler. By capturing GNN primitive operations and representing them in the GNN IR, Zipper is able to fit GNN semantics into the hardware structure for efficient execution. The IR also enables GNN-specific optimizations, including sparse graph tiling and redundant operation elimination. We further present a hardware architecture design consisting of dedicated blocks for different primitive operations, along with a run-time scheduler that maps an IR program to the hardware blocks. Our evaluation shows that Zipper achieves 93.6x speedup and 147x energy reduction over an Intel Xeon CPU, and 1.56x speedup and 4.85x energy reduction over an NVIDIA V100 GPU on average.
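
As a hypothetical illustration of the sparse graph tiling optimization mentioned above (Zipper's IR and scheduler are not reproduced here), the sketch below buckets a graph's edge list by source tile and destination tile, so that each bucket touches only a bounded slice of node features that could be kept on-chip.

from collections import defaultdict

def tile_edges(src, dst, tile_size):
    # Group edges into (src_tile, dst_tile) buckets: processing one bucket
    # only reads/writes node features within those two tiles.
    tiles = defaultdict(list)
    for s, d in zip(src, dst):
        tiles[(s // tile_size, d // tile_size)].append((s, d))
    return tiles

# Tiny example graph with 6 nodes and a few edges.
src = [0, 1, 2, 4, 5]
dst = [1, 3, 5, 0, 2]
for tile, edges in sorted(tile_edges(src, dst, tile_size=2).items()):
    print(tile, edges)
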
