Explainable AI and susceptibility to adversarial attacks: a case study in classification of breast ultrasound images


Abstract

Ultrasound is a non-invasive imaging modality that can be conveniently used to classify suspicious breast nodules and potentially detect the onset of breast cancer. Recently, Convolutional Neural Network (CNN) techniques have shown promising results in classifying breast ultrasound images as benign or malignant. However, CNN inference acts as a black box, and as such its decision-making is not interpretable. Therefore, increasing effort has been dedicated to explaining this process, most notably through Grad-CAM and other techniques that provide visual explanations of the inner workings of CNNs. Beyond interpretation, these methods provide clinically important information, such as identifying the location for biopsy or treatment. In this work, we analyze how practically imperceptible adversarial attacks can be devised to dramatically alter these importance maps. Furthermore, we show that this change in the importance maps can occur with or without altering the classification result, rendering the attacks even harder to detect. As such, care must be taken when using these importance maps to shed light on the inner workings of deep learning. Finally, we utilize Multi-Task Learning (MTL) and propose a new network based on ResNet-50 to improve classification accuracy. Our sensitivity and specificity are comparable to state-of-the-art results.
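Since the abstract centers on Grad-CAM importance maps, a minimal sketch of how such a map is computed may be helpful. This assumes PyTorch and torchvision; the hook names and the choice of `layer4` as the target layer are illustrative conventions, not the paper's implementation.

```python
# Minimal Grad-CAM sketch on a ResNet-50 (assumes PyTorch + torchvision).
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

model = resnet50(weights=None)  # placeholder backbone; the paper builds on ResNet-50
model.eval()

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

# Hook the last convolutional block, a common Grad-CAM choice.
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)      # placeholder for an ultrasound image
logits = model(x)
score = logits[0, logits.argmax()]   # score of the class being explained
model.zero_grad()
score.backward()

# Grad-CAM: weight each feature channel by its spatially averaged gradient,
# sum over channels, and keep only positive evidence (ReLU).
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
```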
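The kind of attack the abstract describes can be sketched as a PGD-style optimization: perturb the input within a small L-infinity ball so that the Grad-CAM map moves away from the original, while an optional cross-entropy term pins the predicted label in place. This is a hedged sketch, not the paper's exact attack; `grad_cam` is a hypothetical helper that must compute the map differentiably (e.g., via `torch.autograd.grad(..., create_graph=True)` rather than detached hooks), and the step sizes are illustrative.

```python
# Hedged sketch of a saliency-targeted attack: alter the importance map
# while (optionally) keeping the predicted class unchanged.
import torch
import torch.nn.functional as F

def attack_saliency(model, x, grad_cam, steps=40, eps=2/255, alpha=0.5/255,
                    keep_label=True):
    target_cam = grad_cam(model, x).detach()      # original importance map
    orig_label = model(x).argmax(dim=1)
    delta = torch.zeros_like(x, requires_grad=True)

    for _ in range(steps):
        model.zero_grad(set_to_none=True)
        cam = grad_cam(model, x + delta)          # must stay differentiable
        # Push the new map away from the original one...
        loss = -torch.norm(cam - target_cam)
        if keep_label:
            # ...while penalizing any drift in the predicted class.
            loss = loss + F.cross_entropy(model(x + delta), orig_label)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()    # descend on the joint loss
            delta.clamp_(-eps, eps)               # keep the perturbation tiny
        delta.grad = None
    return (x + delta).detach()
```

The `keep_label` flag mirrors the abstract's point that the map can be corrupted with or without flipping the classification, which is what makes such attacks hard to detect.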
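For the MTL network, the abstract gives only the backbone (ResNet-50). The sketch below assumes a shared encoder feeding a benign/malignant classification head plus a hypothetical auxiliary segmentation head (a common pairing for breast-ultrasound datasets with lesion masks), trained with a weighted sum of losses; the task pairing and loss weight are assumptions, not the authors' design.

```python
# Hedged MTL sketch: shared ResNet-50 trunk with two task heads.
import torch
import torch.nn as nn
from torchvision.models import resnet50

class MTLResNet50(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        backbone = resnet50(weights=None)
        # Shared encoder: all layers up to (but excluding) avgpool and fc.
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.cls_head = nn.Linear(2048, num_classes)   # benign vs. malignant
        self.seg_head = nn.Sequential(                 # hypothetical auxiliary task
            nn.Conv2d(2048, 256, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 1, 1),
            nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
        )

    def forward(self, x):
        feats = self.encoder(x)                        # (B, 2048, H/32, W/32)
        logits = self.cls_head(self.pool(feats).flatten(1))
        mask = self.seg_head(feats)                    # lesion-mask logits
        return logits, mask

# Joint objective: weighted sum of task losses (the 0.5 weight is illustrative).
model = MTLResNet50()
x = torch.randn(2, 3, 224, 224)
y_cls = torch.tensor([0, 1])
y_mask = torch.rand(2, 1, 224, 224).round()
logits, mask = model(x)
loss = nn.functional.cross_entropy(logits, y_cls) \
     + 0.5 * nn.functional.binary_cross_entropy_with_logits(mask, y_mask)
```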
