Can Targeted Adversarial Examples Transfer When the Source and Target Models Have No Label Space Overlap?


Abstract

We design blackbox transfer-based targeted adversarial attacks for an environment where the attacker's source model and the target blackbox model may have disjoint label spaces and training datasets. This scenario significantly differs from the standard blackbox setting, and warrants a unique approach to the attack process. Our methodology begins with the construction of a class correspondence matrix between the whitebox and blackbox label sets. During the online phase of the attack, we then leverage representations of highly related proxy classes from the whitebox distribution to fool the blackbox model into predicting the desired target class. Our attacks are evaluated in three complex and challenging test environments where the source and target models have varying degrees of conceptual overlap amongst their unique categories. Ultimately, we find that it is indeed possible to construct targeted transfer-based adversarial attacks between models that have non-overlapping label spaces! We also analyze the sensitivity of attack success to properties of the clean data. Finally, we show that our transfer attacks serve as powerful adversarial priors when integrated with query-based methods, markedly boosting query efficiency and adversarial success.
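To make the two-stage pipeline concrete, below is a minimal illustrative sketch, not the paper's released implementation. It assumes a PyTorch whitebox source model and precomputed embeddings of each label set's class names; the helper names (`build_correspondence`, `proxy_targeted_attack`), the cosine-similarity correspondence, and the PGD-style targeted step are all hypothetical stand-ins for the method the abstract describes.

```python
# Illustrative sketch of the two stages described in the abstract:
# (1) offline: build a class correspondence matrix between disjoint label sets;
# (2) online: attack the whitebox toward the proxy class most correlated with
#     the desired blackbox target class, then transfer the example.
# All names and hyperparameters here are assumptions, not the paper's code.

import torch
import torch.nn.functional as F

def build_correspondence(src_label_embs: torch.Tensor,
                         tgt_label_embs: torch.Tensor) -> torch.Tensor:
    """Cosine-similarity correspondence between the whitebox (source) and
    blackbox (target) label sets, computed from per-class embeddings
    (e.g., text embeddings of the class names)."""
    src = F.normalize(src_label_embs, dim=1)  # (num_src_classes, d)
    tgt = F.normalize(tgt_label_embs, dim=1)  # (num_tgt_classes, d)
    return src @ tgt.t()                      # (num_src_classes, num_tgt_classes)

def proxy_targeted_attack(model, x, tgt_class, corr,
                          eps=8 / 255, alpha=1 / 255, steps=40):
    """PGD-style targeted attack on the whitebox model, aimed at the
    source-label proxy class most correlated with the blackbox target."""
    proxy = corr[:, tgt_class].argmax().item()  # best whitebox proxy class
    proxy_t = torch.full((x.size(0),), proxy, dtype=torch.long, device=x.device)
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), proxy_t)      # pull toward proxy
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() - alpha * grad.sign()       # targeted descent
        x_adv = x + (x_adv - x).clamp(-eps, eps)           # project to L_inf ball
        x_adv = x_adv.clamp(0, 1)                          # keep valid pixels
    return x_adv  # query the blackbox with x_adv, hoping for tgt_class
```

Such a transfer example could also seed a query-based attack (the "adversarial prior" the abstract mentions) by using `x_adv` as the starting point instead of the clean input.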
