Kryptonite: An Adversarial Attack Using Regional Focus


Abstract

With the rise of adversarial machine learning and increasingly robust adversarial attacks, the security of applications that rely on machine learning has been called into question. Over the past few years, deep learning applications built on Deep Neural Networks (DNNs) have become commonplace in fields such as medical diagnosis, security systems, and virtual assistants, and have consequently become more exposed and susceptible to attack. In this paper, we present a novel study of weaknesses in the security of deep learning systems. We propose Kryptonite, an adversarial attack on images. We explicitly extract the Region of Interest (RoI) of each image and use it to add imperceptible adversarial perturbations that fool the DNN. We test our attack on several DNNs and compare our results with state-of-the-art adversarial attacks such as the Fast Gradient Sign Method (FGSM), DeepFool (DF), the Momentum Iterative Fast Gradient Sign Method (MIFGSM), and Projected Gradient Descent (PGD). Our attack causes the largest drop in network accuracy while introducing the smallest possible perturbation and requiring considerably less time per sample. We thoroughly evaluate our attack against three adversarial defence techniques, and the promising results demonstrate its efficacy.
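To make the regional-focus idea concrete, the following is a minimal sketch of an RoI-masked, FGSM-style perturbation in PyTorch. It is an illustration under stated assumptions, not the paper's algorithm: the function name `roi_masked_fgsm`, the precomputed binary `roi_mask`, and the single-step sign-gradient update are ours for exposition; Kryptonite's actual RoI extraction and perturbation procedure are those described in the paper.

```python
import torch
import torch.nn.functional as F

def roi_masked_fgsm(model, x, y, roi_mask, epsilon=0.03):
    """One-step FGSM-style perturbation confined to a Region of Interest.

    x:        input image batch, shape (N, C, H, W), values in [0, 1]
    y:        ground-truth labels, shape (N,)
    roi_mask: binary tensor broadcastable to x, 1 inside the RoI, 0 outside
              (hypothetical input; the paper extracts the RoI itself)
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Restrict the sign-gradient step to the RoI so pixels outside the
    # salient region are left untouched, keeping the change imperceptible.
    x_adv = x + epsilon * x.grad.sign() * roi_mask
    return x_adv.clamp(0, 1).detach()
```

The key design point this sketch captures is that masking concentrates the perturbation budget on the pixels the classifier actually attends to, rather than spreading it uniformly across the image as plain FGSM does.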
