Robust Physical Hard-Label Attacks on Deep Learning Visual Classification


Abstract

The physical, black-box hard-label setting is arguably the most realistic threat model for cyber-physical vision systems. In this setting, the attacker only has query access to the model and only receives the top-1 class label without confidence information. Creating small physical stickers that are robust to environmental variation is difficult in the discrete and discontinuous hard-label space because the attack must both design a small shape to perturb within and find robust noise to fill it with. Unfortunately, we find that existing $\ell_2$- or $\ell_\infty$-minimizing hard-label attacks do not easily extend to finding such robust physical perturbation attacks. Thus, we propose GRAPHITE, the first algorithm for hard-label physical attacks on computer vision models. We show that survivability, an estimate of physical variation robustness, can be used in new ways to generate small masks and is a sufficiently smooth function to optimize with gradient-free optimization. We use GRAPHITE to attack a traffic sign classifier and a publicly available Automatic License Plate Recognition (ALPR) tool using only query access. We evaluate both attacks in real-world field tests to measure their physical-world robustness. We successfully cause a Stop sign to be misclassified as a Speed Limit 30 km/hr sign in 95.7% of physical images and cause errors in 75% of physical images for the ALPR tool.
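The key idea above, that survivability (the fraction of sampled physical transformations under which the hard-label classifier still returns the attacker's target) is smooth enough for gradient-free optimization, can be illustrated with a minimal sketch. This is a toy setup, not GRAPHITE's actual code: the classifier, transformations, and all function names and parameters here are illustrative assumptions, and the gradient estimator is a generic antithetic zeroth-order estimator rather than the paper's exact procedure.

```python
# Illustrative sketch only: toy survivability objective and a generic
# zeroth-order (antithetic-sampling) gradient estimate over hard-label
# queries. Names and hyperparameters are hypothetical, not GRAPHITE's API.
import numpy as np

def survivability(image, mask, noise, classify, target, transforms):
    """Fraction of sampled transforms under which the masked perturbation
    still yields the target label (uses top-1 hard-label queries only)."""
    perturbed = image * (1 - mask) + noise * mask
    hits = sum(classify(t(perturbed)) == target for t in transforms)
    return hits / len(transforms)

def estimate_grad(image, mask, noise, classify, target, transforms,
                  rng, n_samples=20, sigma=0.05):
    """Two-sided finite-difference estimate of d(survivability)/d(noise)."""
    grad = np.zeros_like(noise)
    for _ in range(n_samples):
        u = rng.standard_normal(noise.shape)
        s_plus = survivability(image, mask, noise + sigma * u,
                               classify, target, transforms)
        s_minus = survivability(image, mask, noise - sigma * u,
                                classify, target, transforms)
        grad += (s_plus - s_minus) / (2.0 * sigma) * u
    return grad / n_samples

# Toy demo: a linear "classifier" and brightness-jitter "transforms".
rng = np.random.default_rng(0)
image = np.zeros((4, 4))
mask = np.zeros((4, 4)); mask[1:3, 1:3] = 1.0     # small sticker region
noise = np.full((4, 4), 0.1)
classify = lambda x: 1 if x.sum() > 0.5 else 0    # target class is 1
transforms = [lambda x, b=b: x + b for b in (-0.02, 0.0, 0.02)]

s0 = survivability(image, mask, noise, classify, 1, transforms)
# Gradient ascent on the noise inside the mask raises survivability.
for _ in range(30):
    g = estimate_grad(image, mask, noise, classify, 1, transforms, rng)
    noise = np.clip(noise + 0.5 * g * mask, 0.0, 1.0)
s1 = survivability(image, mask, noise, classify, 1, transforms)
print(s0, s1)
```

Because each query returns only a label, survivability averages many hard-label outcomes into a roughly continuous score, which is what makes the finite-difference estimate above informative despite the discontinuous decision boundary.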
