Nonlocal-based blocks are designed to capture long-range spatial-temporal dependencies in computer vision tasks. Although they have shown excellent performance, they lack a mechanism for encoding the rich, structured information among elements in an image. In this paper, to theoretically analyze the properties of these nonlocal-based blocks, we provide a unified approach to interpreting them, in which we view them as a graph filter generated on a fully-connected graph. When the graph filter is approximated by Chebyshev polynomials, a generalized formulation can be derived that explains the existing nonlocal-based blocks ($\mathit{e.g.,}$ the nonlocal block, the nonlocal stage, and the double attention block). Furthermore, we propose an efficient and robust spectral nonlocal block, which can be flexibly inserted into deep neural networks to capture the long-range dependencies between spatial pixels or temporal frames. Experimental results demonstrate the clear-cut improvements and practical applicability of the spectral nonlocal block on image classification (CIFAR-10/100, ImageNet), fine-grained image classification (CUB-200), action recognition (UCF-101), and person re-identification (iLIDS-VID, MARS, PRID-2011) tasks.
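To make the graph-filter view concrete, the display below sketches the standard $K$-th order Chebyshev approximation of a spectral filter, as used in spectral graph convolution; it is shown only as an illustration of the approximation the abstract refers to, with $\boldsymbol{X}$ the node (pixel or frame) features, $\tilde{\boldsymbol{L}}$ the scaled, normalized Laplacian of the fully-connected affinity graph, $T_k$ the Chebyshev polynomials, and $\theta_k$ learnable coefficients. The exact generalized formulation adopted in this paper is derived in the main text and may differ in its parameterization:
\begin{equation*}
g_{\theta}(\boldsymbol{L})\,\boldsymbol{X} \;\approx\; \sum_{k=0}^{K} \theta_k\, T_k(\tilde{\boldsymbol{L}})\,\boldsymbol{X},
\qquad
T_0(\tilde{\boldsymbol{L}}) = \boldsymbol{I},\quad
T_1(\tilde{\boldsymbol{L}}) = \tilde{\boldsymbol{L}},\quad
T_k(\tilde{\boldsymbol{L}}) = 2\,\tilde{\boldsymbol{L}}\,T_{k-1}(\tilde{\boldsymbol{L}}) - T_{k-2}(\tilde{\boldsymbol{L}}).
\end{equation*}
Truncating this expansion at low order recovers block structures of the same form as the existing nonlocal-based blocks, which is the sense in which the unified formulation above generalizes them.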