Recently, researchers have discovered that state-of-the-art object classifiers can be easily fooled by small input perturbations that are imperceptible to human eyes. It is also known that an attacker can generate strong adversarial examples if she knows the classifier parameters. Conversely, a defender can robustify the classifier by retraining on adversarial examples if she has access to them. We explain and formulate this adversarial example problem as a two-player continuous zero-sum game, and demonstrate the fallacy of evaluating a defense or an attack as a static problem. To find the best worst-case defense against whitebox attacks, we propose a continuous minimax optimization algorithm. We demonstrate the minimax defense with two types of attack classes -- gradient-based and neural network-based attacks. Experiments on the MNIST and CIFAR-10 datasets show that the defense found by numerical minimax optimization is indeed more robust than non-minimax defenses. We discuss directions for improving the result toward achieving robustness against multiple types of attack classes.
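As a rough sketch of this formulation (the symbols below, such as the defense parameters $\theta$, the attack parameters $\phi$, and the loss $\ell$, are our own illustrative notation rather than the paper's exact definitions), the best worst-case defense can be written as a continuous minimax problem over a joint classifier-attacker objective:

$$
\theta^{\ast} \in \arg\min_{\theta \in \Theta} \; \max_{\phi \in \Phi} \; \mathbb{E}_{(x,y)\sim \mathcal{D}} \Big[ \ell\big(f_{\theta}(x + g_{\phi}(x)),\, y\big) \Big],
$$

where $f_{\theta}$ is the classifier, $g_{\phi}$ is an attack (e.g., a gradient-based procedure or a perturbation-generating network) constrained to imperceptible perturbations such as $\|g_{\phi}(x)\|_{\infty} \le \epsilon$, and $\mathcal{D}$ is the data distribution. Under this view, evaluating a fixed defense against a fixed attack corresponds to a single point $(\theta, \phi)$ of the game rather than its equilibrium, which is why static evaluations of a defense or an attack can be misleading.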