Nonlocal-based blocks are designed to capture long-range spatial-temporal dependencies in computer vision tasks. Although they have shown excellent performance, they lack a mechanism to encode the rich, structured information among elements in an image. In this paper, to theoretically analyze the properties of these nonlocal-based blocks, we provide a unified approach to interpreting them, in which we view them as a graph filter generated on a fully-connected graph. When the graph filter is approximated by Chebyshev polynomials, a generalized formulation can be derived that explains the existing nonlocal-based blocks ($\mathit{e.g.}$, nonlocal block, nonlocal stage, double attention block). Furthermore, we propose an efficient and robust spectral nonlocal block, which can be flexibly inserted into deep neural networks to capture long-range dependencies between spatial pixels or temporal frames. Experimental results demonstrate the clear-cut improvements and practical applicability of the spectral nonlocal block on image classification (Cifar-10/100, ImageNet), fine-grained image classification (CUB-200), action recognition (UCF-101), and person re-identification (ILID-SVID, Mars, Prid-2011) tasks.
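For context, the standard truncated Chebyshev approximation of a graph filter (not the paper's exact formulation, which is given in the full text; the notation below is assumed for illustration) acts on input features $X$ through a rescaled Laplacian $\tilde{L}$ of the fully-connected affinity graph:
$$
g_{\theta}(L)\,X \;\approx\; \sum_{k=0}^{K} \theta_k \, T_k(\tilde{L})\, X,
\qquad
\tilde{L} = \frac{2}{\lambda_{\max}} L - I,
$$
where $\theta_k$ are learnable coefficients, $\lambda_{\max}$ is the largest eigenvalue of $L$, and the Chebyshev polynomials follow the recurrence $T_0(\tilde{L}) = I$, $T_1(\tilde{L}) = \tilde{L}$, $T_k(\tilde{L}) = 2\tilde{L}\,T_{k-1}(\tilde{L}) - T_{k-2}(\tilde{L})$. In the nonlocal setting, $L$ would be built from the affinity matrix over all spatial (or spatial-temporal) positions, so each order $k$ aggregates information over increasingly long-range dependencies.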