Deep Convolutional Neural Networks (CNNs) have become the state of the art for computer vision and other signal processing tasks due to their superior accuracy. In recent years, large efforts have been made to reduce the computational costs of CNNs in order to achieve real-time operation on low-power embedded devices. Towards this goal, we present BinArray, a custom hardware accelerator for CNNs with binary approximated weights. The binary approximation used in this paper is an improved version of a network compression technique initially suggested in [1]. It drastically reduces the number of multiplications required per inference with little or no accuracy degradation. BinArray scales easily and allows trading off hardware resource usage against throughput by means of three design parameters transparent to the user. Furthermore, either high accuracy or high throughput can be selected dynamically at runtime. BinArray has been optimized at the register-transfer level and operates at 400 MHz as an instruction-set processor within a heterogeneous XC7Z045-2 FPGA-SoC platform. Experimental results show that BinArray scales to match the performance of other accelerators, such as EdgeTPU [2], for different network sizes. Even for the largest MobileNet, only 50% of the target device and only 96 DSP blocks are utilized.
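To make the weight-approximation idea concrete, the sketch below greedily fits a real-valued weight vector as a scaled sum of M binary basis vectors, in the spirit of the compression technique from [1]; the function and parameter names are illustrative, and BinArray's actual fitting procedure may differ. A dot product against the approximated weights then needs only M real multiplications, since each binary term reduces to additions and subtractions.

```python
import numpy as np

def binary_approx(w, m):
    """Greedily fit w ~= sum_i alphas[i] * bases[i] with bases[i] in {-1, +1}^n.

    Illustrative sketch only; not the exact procedure used by BinArray.
    """
    residual = w.astype(np.float64).copy()
    alphas, bases = [], []
    for _ in range(m):
        b = np.where(residual >= 0.0, 1.0, -1.0)  # binary basis: sign of residual
        a = np.abs(residual).mean()               # least-squares scale: (b . r) / n
        alphas.append(a)
        bases.append(b)
        residual -= a * b                         # fit the next term to what remains
    return np.array(alphas), np.stack(bases)

rng = np.random.default_rng(0)
w = rng.standard_normal(256)   # one filter's weights (hypothetical size)
x = rng.standard_normal(256)   # input activations
alphas, bases = binary_approx(w, m=3)

# In hardware, each bases[i] @ x needs only additions/subtractions;
# just m = 3 real multiplications (by the alphas) remain per dot product.
approx = sum(a * (b @ x) for a, b in zip(alphas, bases))
print(f"exact: {w @ x:.3f}  approx: {approx:.3f}")
```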
High Bandwidth Memory (HBM) provides massive aggregated memory bandwidth by exposing multiple memory channels to the processing units. To achieve high performance, an accelerator built on top of an FPGA configured with HBM (i.e., an FPGA-HBM platform) …
Genomics is the foundation of precision medicine, global food security, and virus surveillance. Exact matching is one of the most essential operations, widely used in almost every step of genomics, such as alignment, assembly, annotation, and compression.
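As a minimal, generic illustration of the exact-match operation (not the paper's accelerator design), the snippet below indexes every k-mer of a reference string in a hash table and looks up query substrings; all names are illustrative.

```python
def build_kmer_index(reference: str, k: int) -> dict[str, list[int]]:
    """Map every length-k substring (k-mer) of the reference to its positions."""
    index: dict[str, list[int]] = {}
    for i in range(len(reference) - k + 1):
        index.setdefault(reference[i:i + k], []).append(i)
    return index

ref = "ACGTACGTGACG"
index = build_kmer_index(ref, k=4)
print(index.get("ACGT"))  # exact-match hits at positions [0, 4]
print(index.get("TTTT"))  # no hit: None
```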
Transfer learning in natural language processing (NLP), as realized using models like BERT (Bidirectional Encoder Representations from Transformers), has significantly improved language representation with models that can tackle challenging language …
Energy efficiency and computing flexibility are some of the primary design constraints of heterogeneous computing. In this paper, we present FlashAbacus, a data-processing accelerator that self-governs heterogeneous kernel executions and data storage …
Implementing embedded neural network processing at the edge requires efficient hardware acceleration that couples high computational performance with low power consumption. Driven by the rapid evolution of network architectures and their algorithmic …