Face recognition (FR) systems have been widely applied in safety-critical fields with the introduction of deep learning. However, the existence of adversarial examples brings potential security risks to FR systems. To identify their vulnerability and help improve their robustness, in this paper we propose Meaningful Adversarial Stickers, a physically feasible and easily implemented attack method that uses meaningful real stickers existing in our life, where the attacker manipulates the pasting parameters of stickers on the face instead of designing perturbation patterns and then printing them, as most existing works do. We conduct attacks in the black-box setting with limited information, which is more challenging and practical. To effectively solve for the pasting position, rotation angle, and other parameters of the stickers, we design a Region-based Heuristic Differential Algorithm, which utilizes an inbreeding strategy based on regional aggregation of effective solutions and an adaptive adjustment strategy for the evaluation criteria. Extensive experiments are conducted on two public datasets, LFW and CelebA, with respect to three representative FR models, namely FaceNet, SphereFace, and CosFace, achieving attack success rates of 81.78%, 72.93%, and 79.26% respectively with only hundreds of queries. The results in the physical world confirm the effectiveness of our method under complex physical conditions. When the face posture of testers is continuously changed, the method can still perform successful attacks up to 98.46%, 91.30%, and 86.96% of the time over the time series.
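The abstract describes searching the sticker's pasting parameters (position and rotation angle) in a black-box setting with only hundreds of queries. Below is a minimal sketch of that idea using a plain differential-evolution search over (x, y, angle); it is not the paper's Region-based Heuristic Differential Algorithm (no inbreeding or adaptive evaluation criteria), and query_similarity, the 112-pixel face size, and the parameter bounds are illustrative assumptions.

```python
import numpy as np

# --- Hypothetical stand-in (not from the paper) ---------------------------
# In a real attack this would paste the sticker template onto the face image
# at position (x, y) with rotation theta and query the black-box FR model for
# the similarity to the enrolled identity. A smooth toy surrogate keeps the
# sketch runnable without an FR model.
def query_similarity(params: np.ndarray) -> float:
    x, y, theta = params
    # Toy surrogate: similarity drops as the sticker approaches (60, 40, 15 deg).
    return float(1.0 - np.exp(-(((x - 60) / 30) ** 2
                                + ((y - 40) / 30) ** 2
                                + ((theta - 15) / 20) ** 2)))

# Search space for the pasting parameters (assumed 112x112 aligned face).
BOUNDS = np.array([[0.0, 112.0],    # horizontal position (pixels)
                   [0.0, 112.0],    # vertical position (pixels)
                   [-45.0, 45.0]])  # rotation angle (degrees)

def de_sticker_search(pop_size=20, iters=30, F=0.7, CR=0.9, seed=0):
    """Plain DE/rand/1/bin over sticker parameters for a dodging attack:
    minimize similarity between the stickered face and the enrolled identity."""
    rng = np.random.default_rng(seed)
    dim = len(BOUNDS)
    lo, hi = BOUNDS[:, 0], BOUNDS[:, 1]
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([query_similarity(p) for p in pop])
    for _ in range(iters):
        for i in range(pop_size):
            # Mutate three distinct candidates other than the current one.
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True        # guarantee one mutated gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = query_similarity(trial)
            if f_trial < fit[i]:                   # keep the better candidate
                pop[i], fit[i] = trial, f_trial
    best = int(np.argmin(fit))
    return pop[best], fit[best]

if __name__ == "__main__":
    params, sim = de_sticker_search()
    print(f"best (x, y, angle) = {np.round(params, 2)}, similarity = {sim:.3f}")
```

With the settings above the search issues about 620 model queries (20 initial plus 20 per iteration), on the same order as the "hundreds of queries" reported in the abstract; the population size and iteration budget are knobs for trading queries against attack success.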
Deep face recognition (FR) has achieved significantly high accuracy on several challenging datasets and fosters successful real-world applications, even showing high robustness to the illumination variation that is usually regarded as a main threat t
Most machine learning models are validated and tested on fixed datasets. This can give an incomplete picture of the capabilities and weaknesses of the model. Such weaknesses can be revealed at test time in the real world. The risks involved in such f
It is known that deep neural networks (DNNs) are vulnerable to adversarial attacks. The so-called physical adversarial examples deceive DNN-based decision-makers by attaching adversarial patches to real objects. However, most of the existing works on
Deep learning models are vulnerable to adversarial examples. As a more threatening type for practical deep learning systems, physical adversarial examples have received extensive research attention in recent years. However, without exploiting the int
Face recognition has obtained remarkable progress in recent years due to the great improvement of deep convolutional neural networks (CNNs). However, deep CNNs are vulnerable to adversarial examples, which can cause fateful consequences in real-world