A-FMI: Learning Attributions from Deep Networks via Feature Map Importance


Abstract

Gradient-based attribution methods can aid the understanding of convolutional neural networks (CNNs). However, attribution methods still face two challenges: the redundancy of attribution features and the gradient saturation problem, both of which weaken the ability to identify significant features and shift the focus of explanations. In this work, we propose: 1) Strong Relevance, an essential characteristic for selecting attribution features; 2) feature map importance (FMI), a new concept that refines the contribution of each feature map and is faithful to the CNN model; and 3) A-FMI, a novel attribution method via FMI that addresses the gradient saturation problem by coupling the target image with a reference image and assigning the FMI to the difference-from-reference at the granularity of feature maps. Through visual inspections and qualitative evaluations on the ImageNet dataset, we demonstrate the compelling advantages of A-FMI in its faithfulness, insensitivity to the choice of reference, and class discriminability, as well as its superior explanation performance compared with popular attribution methods across varying CNN architectures.
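To make the described mechanism concrete, the following is a minimal PyTorch sketch of the general idea: attributions are formed by weighting the difference between target and reference activations of a chosen CNN layer with a per-feature-map importance score. The function name, the gradient-averaging proxy used for the importance score, and the all-zero reference image are illustrative assumptions for this sketch, not the paper's exact FMI formulation.

```python
# Hedged sketch of difference-from-reference attribution at feature-map
# granularity. The per-feature-map importance is approximated here by the
# spatially averaged gradient of the class score (a placeholder for FMI).
import torch
import torch.nn.functional as F
from torchvision import models


def attribution_via_feature_map_importance(model, layer, target, reference, class_idx):
    """Attribute class_idx by weighting difference-from-reference activations."""
    acts = {}

    def hook(_, __, output):
        acts["value"] = output

    handle = layer.register_forward_hook(hook)
    try:
        # Forward pass on the target image (gradients enabled).
        target = target.requires_grad_(True)
        score = model(target)[:, class_idx].sum()
        act_target = acts["value"]

        # Forward pass on the reference image (no gradients needed).
        with torch.no_grad():
            model(reference)
            act_reference = acts["value"]
    finally:
        handle.remove()

    # Per-feature-map importance: spatially averaged gradient of the class
    # score w.r.t. the target activations (stand-in for the paper's FMI).
    grads = torch.autograd.grad(score, act_target)[0]           # (N, C, H, W)
    importance = grads.mean(dim=(2, 3), keepdim=True)           # (N, C, 1, 1)

    # Weight the difference-from-reference per feature map, then sum over
    # channels to obtain a spatial attribution map.
    diff = act_target - act_reference                            # (N, C, H, W)
    attribution = (importance * diff).sum(dim=1, keepdim=True)   # (N, 1, H, W)

    # Upsample to the input resolution for visualization.
    return F.interpolate(attribution, size=target.shape[-2:],
                         mode="bilinear", align_corners=False)


if __name__ == "__main__":
    model = models.vgg16(weights=None).eval()
    layer = model.features[28]                    # last conv layer of VGG16
    target_img = torch.rand(1, 3, 224, 224)       # stand-in for a real image
    reference_img = torch.zeros(1, 3, 224, 224)   # e.g. an all-black reference
    heatmap = attribution_via_feature_map_importance(
        model, layer, target_img, reference_img, class_idx=243)
    print(heatmap.shape)  # torch.Size([1, 1, 224, 224])
```

The reference image here is a plain black baseline; one reported advantage of the method is insensitivity to this choice, so other references (e.g., a blurred copy of the target) could be substituted in the same sketch.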
