Graph neural networks (GNNs) have gained momentum after showing significant performance improvements in a variety of domains, including molecular science, recommendation, and transportation. Turning these improvements into practical applications relies on effective and efficient execution, especially for inference. However, neither CPUs nor GPUs can meet these needs when both performance and energy efficiency are considered. That is because accelerating GNNs is challenging due to their excessive memory usage and arbitrary interleaving of diverse operations. Moreover, the semantic gap between the high-level GNN programming model and efficient hardware makes it difficult to accelerate general-domain GNNs. To address these challenges, we propose Zipper, an efficient yet general acceleration system for GNNs. The keys to Zipper are a graph-native intermediate representation (IR) and the associated compiler. By capturing GNN primitive operations and representing them in the GNN IR, Zipper is able to fit GNN semantics into hardware structures for efficient execution. The IR also enables GNN-specific optimizations, including sparse graph tiling and redundant operation elimination. We further present a hardware architecture design consisting of dedicated blocks for different primitive operations, along with a runtime scheduler that maps an IR program to the hardware blocks. Our evaluation shows that Zipper achieves 93.6x speedup and 147x energy reduction over an Intel Xeon CPU, and 1.56x speedup and 4.85x energy reduction over an NVIDIA V100 GPU on average.
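To make the primitive-operation decomposition concrete, here is a minimal sketch (not Zipper's actual IR or API; all names are illustrative) of how a GCN-style layer splits into the kinds of primitives such an IR would capture: a sparse aggregation over the graph structure, a dense per-node update, and an element-wise activation.

```python
import numpy as np
import scipy.sparse as sp

def gnn_layer(adj: sp.csr_matrix, feats: np.ndarray, weight: np.ndarray) -> np.ndarray:
    aggregated = adj @ feats         # sparse primitive: gather/reduce over edges
    updated = aggregated @ weight    # dense primitive: per-node transformation
    return np.maximum(updated, 0.0)  # element-wise primitive: ReLU

# Usage on a random graph with 5 nodes, 8 input / 4 output features.
adj = sp.random(5, 5, density=0.4, format="csr")
h = gnn_layer(adj, np.random.rand(5, 8), np.random.rand(8, 4))
```

The sparse/dense split is the kind of structure that lets an accelerator dispatch each primitive to a dedicated hardware block.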
This paper summarizes the idea of Subarray-Level Parallelism (SALP) in DRAM, which was published in ISCA 2012, and examines the work's significance and future potential. Modern DRAMs have multiple banks to serve multiple memory requests in parallel. H
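As a rough illustration of the idea behind SALP (the latency constants below are made up for illustration, not taken from the paper): requests to different subarrays of the same bank can largely overlap, whereas requests to one subarray must serialize on the shared row buffer.

```python
# Toy timing model: n row activations sent to a single DRAM bank.
# t_row: full row-cycle time; t_overlap: exposed latency per extra request
# when subarray-level parallelism lets activations overlap.
def bank_service_time(n_requests: int, distinct_subarrays: bool,
                      t_row: int = 50, t_overlap: int = 15) -> int:
    if not distinct_subarrays:
        return n_requests * t_row                 # serialized on one row buffer
    return t_row + (n_requests - 1) * t_overlap   # mostly overlapped

print(bank_service_time(4, False))  # 200: same subarray, fully serialized
print(bank_service_time(4, True))   # 95: different subarrays overlap
```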
Exploiting sparsity is a key technique in accelerating quantized convolutional neural network (CNN) inference on mobile devices. Prior sparse CNN accelerators largely exploit unstructured sparsity and achieve significant speedups. Due to the unbound
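As background for why sparsity yields speedups, here is a minimal sketch (illustrative, not any accelerator's actual dataflow) of a filter stored in compressed form, so the kernel only performs multiply-accumulates on nonzero weights.

```python
import numpy as np

def sparse_dot(nz_idx: np.ndarray, nz_val: np.ndarray,
               activations: np.ndarray) -> float:
    # One filter's nonzero weights as (index, value) pairs: the work
    # scales with the nonzero count, not the full filter size.
    return float(nz_val @ activations[nz_idx])

w = np.array([0.0, 1.5, 0.0, 0.0, -2.0, 0.0])   # 2 of 6 weights nonzero
x = np.random.rand(6)
nz = np.nonzero(w)[0]
assert np.isclose(sparse_dot(nz, w[nz], x), w @ x)
```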
The combination of Winograd's algorithm and systolic array architectures has demonstrated the capability of improving DSP efficiency in accelerating convolutional neural networks (CNNs) on FPGA platforms. However, handling arbitrary convolution kernel
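For reference, the DSP saving comes from Winograd's minimal filtering algorithm. Below is a minimal sketch of the standard 1-D F(2,3) case, which computes two outputs of a 3-tap convolution with four multiplies instead of six; the transform matrices are the well-known ones and are not specific to this paper's architecture.

```python
import numpy as np

# Standard F(2,3) Winograd transform matrices.
BT = np.array([[1, 0, -1, 0], [0, 1, 1, 0], [0, -1, 1, 0], [0, 1, 0, -1]], float)
G  = np.array([[1, 0, 0], [0.5, 0.5, 0.5], [0.5, -0.5, 0.5], [0, 0, 1]], float)
AT = np.array([[1, 1, 1, 0], [0, 1, -1, -1]], float)

def winograd_f23(d: np.ndarray, g: np.ndarray) -> np.ndarray:
    # Element-wise product in the transformed domain: 4 multiplies total.
    return AT @ ((G @ g) * (BT @ d))

d = np.array([1.0, 2.0, 3.0, 4.0])   # 4-element input tile
g = np.array([1.0, 0.0, -1.0])       # 3-tap filter
assert np.allclose(winograd_f23(d, g), np.convolve(d, g[::-1], "valid"))
```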
Graph Neural Networks (GNNs) are a new and increasingly popular family of deep neural network architectures to perform learning on graphs. Training them efficiently is challenging due to the irregular nature of graph data. The problem becomes even mo
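The irregularity the abstract refers to shows up already in the basic neighbor-aggregation kernel. A minimal sketch, assuming a CSR adjacency and mean aggregation (names are illustrative):

```python
import numpy as np

def aggregate_mean(indptr: np.ndarray, indices: np.ndarray,
                   feats: np.ndarray) -> np.ndarray:
    # Per-node neighbor lists have data-dependent lengths, so both the
    # inner-loop trip count and the memory access pattern are irregular.
    out = np.zeros_like(feats)
    for v in range(len(indptr) - 1):
        nbrs = indices[indptr[v]:indptr[v + 1]]
        if nbrs.size:
            out[v] = feats[nbrs].mean(axis=0)
    return out
```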
Recently, researchers have explored graph neural network (GNN) techniques for text classification, since GNNs excel at handling complex structures and preserving global information. However, previous GNN-based methods are mainly faced with th
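A common way such methods turn text into graph-structured input (a generic construction, not necessarily this paper's) is a sliding-window word co-occurrence graph:

```python
from collections import Counter

def cooccurrence_edges(tokens: list, window: int = 3) -> Counter:
    # Nodes are words; an edge links two words appearing within
    # `window` positions of each other, weighted by co-occurrence count.
    edges = Counter()
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + window, len(tokens))):
            if tokens[i] != tokens[j]:
                edges[tuple(sorted((tokens[i], tokens[j])))] += 1
    return edges

print(cooccurrence_edges("graph networks learn on graph data".split()))
```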