With the growing interest in attacks on and defenses of deep neural networks, researchers are paying more attention to the robustness of models deployed on devices with limited memory. Thus, unlike adversarial training, which only considers the balance between accuracy and robustness, we turn to a more meaningful and critical issue, i.e., the balance among accuracy, efficiency, and robustness (AER). Several recent works have addressed this issue but report differing observations, and the relations among AER remain unclear. This paper first investigates the robustness of pruned models with different compression ratios under the gradual pruning process and concludes that the robustness of the pruned model varies drastically across pruning processes, especially under strong attacks. Second, we test the performance of mixing clean data and adversarial examples (AEs), generated with a prescribed uniform budget, into the gradual pruning process, called adversarial pruning, and find that the pruned model's robustness is highly sensitive to this budget. Furthermore, to better balance the AER, we propose an approach called blind adversarial pruning (BAP), which introduces the idea of blind adversarial training into the gradual pruning process. The main idea is to use a cutoff-scale strategy to adaptively estimate a nonuniform budget that modifies the AEs used during pruning, so that the strength of the AEs stays within a reasonable range at each pruning step, ultimately improving the overall AER of the pruned model. Experimental results obtained using BAP to prune classification models on several benchmarks demonstrate its competitive performance: the robustness of the model pruned by BAP is more stable across varying pruning processes, and BAP exhibits better overall AER than adversarial pruning.
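The following is a minimal sketch, in PyTorch, of how adversarial examples with a step-dependent budget might be mixed into a gradual magnitude-pruning loop. The FGSM attack, the cubic sparsity schedule, the simple eps ramp with a cutoff, and all function and hyperparameter names are illustrative assumptions; they are not the paper's exact BAP procedure, which estimates a nonuniform budget adaptively via a cutoff-scale strategy.

```python
# Sketch: gradual magnitude pruning trained on a mix of clean and adversarial
# batches, with a per-step (nonuniform) perturbation budget instead of one
# fixed eps. All names and schedules here are illustrative assumptions.
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps):
    """One-step (FGSM) adversarial example with perturbation budget eps."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x_adv + eps * grad.sign()).clamp(0, 1).detach()

def magnitude_prune_(model, sparsity):
    """Zero the smallest-magnitude weights globally until `sparsity` is reached."""
    all_w = torch.cat([p.detach().abs().flatten() for p in model.parameters()])
    k = int(sparsity * all_w.numel())
    if k < 1:
        return
    threshold = all_w.kthvalue(k).values
    with torch.no_grad():
        for p in model.parameters():
            p.mul_((p.abs() > threshold).float())

def adversarial_gradual_pruning(model, loader, optimizer, steps,
                                final_sparsity=0.9, eps_max=8 / 255):
    """Gradual pruning loop mixing clean and adversarial losses at each step."""
    for step in range(steps):
        # Cubic sparsity schedule, in the style of gradual magnitude pruning.
        sparsity = final_sparsity * (1 - (1 - (step + 1) / steps) ** 3)
        magnitude_prune_(model, sparsity)
        # Illustrative adaptive budget: ramp eps up and cut it off at eps_max.
        eps = min(eps_max, eps_max * (step + 1) / steps)
        for x, y in loader:
            x_adv = fgsm_example(model, x, y, eps)
            loss = 0.5 * (F.cross_entropy(model(x), y)
                          + F.cross_entropy(model(x_adv), y))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```

In the actual BAP method, the budget is estimated adaptively through the cutoff-scale strategy rather than from a fixed ramp, which is what keeps the AE strength within a reasonable range at every pruning step.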
We present a differentiable joint pruning and quantization (DJPQ) scheme. We frame neural network compression as a joint gradient-based optimization problem, trading off between model pruning and quantization automatically for hardware efficiency. DJ
We identify a trade-off between robustness and accuracy that serves as a guiding principle in the design of defenses against adversarial examples. Although this problem has been widely studied empirically, much remains unknown concerning the theory u
Adversarial training augments the training set with perturbations to improve the robust error (over worst-case perturbations), but it often leads to an increase in the standard error (on unperturbed test inputs). Previous explanations for this tradeo
Existing generalization measures that aim to capture a model's simplicity based on parameter counts or norms fail to explain generalization in overparameterized deep neural networks. In this paper, we introduce a new, theoretically motivated measure o
Today's state-of-the-art image classifiers fail to correctly classify carefully manipulated adversarial images. In this work, we develop a new, localized adversarial attack that generates adversarial examples by imperceptibly altering the backgrounds