Deep neural networks have achieved impressive performance in various areas, but they have been shown to be vulnerable to adversarial attacks. Previous work on adversarial attacks has mainly focused on the single-task setting. However, in real applications it is often desirable to attack several models for different tasks simultaneously. To this end, we propose Multi-Task adversarial Attack (MTA), a unified framework that crafts adversarial examples for multiple tasks efficiently by leveraging knowledge shared among tasks, which helps enable large-scale application of adversarial attacks to real-world systems. More specifically, MTA uses a perturbation generator that consists of an encoder shared across all tasks and multiple task-specific decoders. Thanks to the shared encoder, MTA reduces storage cost and speeds up inference when attacking multiple tasks simultaneously. Moreover, the proposed framework can generate both per-instance and universal perturbations for targeted and non-targeted attacks. Experimental results on the Office-31 and NYUv2 datasets demonstrate that MTA improves attack quality compared with its single-task counterpart.
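To make the shared-encoder / task-specific-decoder split concrete, the following is a minimal PyTorch sketch of such a perturbation generator. The layer sizes, task names, the tanh-based L-infinity budget, and the MTAPerturbationGenerator class itself are illustrative assumptions for exposition, not the paper's exact architecture or training objective.

```python
# Minimal sketch of a multi-task perturbation generator: one encoder shared
# across tasks, one lightweight decoder per task. All architectural choices
# here (channel widths, kernel sizes, epsilon) are assumptions.
import torch
import torch.nn as nn


class MTAPerturbationGenerator(nn.Module):
    def __init__(self, task_names, channels=3, hidden=64, epsilon=8 / 255):
        super().__init__()
        self.epsilon = epsilon  # assumed L-infinity perturbation budget
        # Encoder shared by all tasks: this is the part that saves storage
        # and computation when several tasks are attacked at once.
        self.shared_encoder = nn.Sequential(
            nn.Conv2d(channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
        )
        # One small decoder per task.
        self.decoders = nn.ModuleDict({
            name: nn.Sequential(
                nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
                nn.Conv2d(hidden, channels, 3, padding=1),
            )
            for name in task_names
        })

    def forward(self, x, tasks):
        # Encode once; every requested task decoder reuses the same features.
        features = self.shared_encoder(x)
        adversarial = {}
        for task in tasks:
            delta = self.epsilon * torch.tanh(self.decoders[task](features))
            adversarial[task] = (x + delta).clamp(0.0, 1.0)
        return adversarial


# Usage with hypothetical task names: a single pass yields adversarial
# inputs for several tasks from one batch of clean images.
gen = MTAPerturbationGenerator(["classification", "segmentation", "depth"])
images = torch.rand(4, 3, 224, 224)
adv = gen(images, ["classification", "segmentation"])
print(adv["classification"].shape)  # torch.Size([4, 3, 224, 224])
```

Because the decoders are small relative to the encoder, adding a task increases the parameter count only modestly, which is the intuition behind the storage and inference savings claimed above.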
Deep learning on graph structures has shown exciting results in various applications. However, little attention has been paid to the robustness of such models, in contrast to the extensive body of research on adversarial attacks and defenses for images and text.
Deep neural networks are vulnerable to adversarial examples that are crafted by imposing imperceptible changes on the inputs. However, these adversarial examples are most successful in white-box settings where the model and its parameters are available.
Recently, textual adversarial attack models have become increasingly popular due to their success in estimating the robustness of NLP models. However, existing works have obvious deficiencies. (1) They usually consider only a single granularity of
Deep neural networks (DNNs) are vulnerable to adversarial examples, which are crafted by adding imperceptible perturbations to inputs. Recently, different attacks and strategies have been proposed, but how to generate adversarial examples perceptually
Recent years have witnessed the emergence and development of graph neural networks (GNNs), which have been shown to be a powerful approach to graph representation learning in many tasks, such as node classification and graph classification.