
Detect and Defense Against Adversarial Examples in Deep Learning using Natural Scene Statistics and Adaptive Denoising

Published by: Sid Ahmed Fezza
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Despite the enormous performance of deep neural networks (DNNs), recent studies have shown their vulnerability to adversarial examples (AEs), i.e., carefully perturbed inputs designed to fool the targeted DNN. Currently, the literature is rich with many effective attacks to craft such AEs. Meanwhile, many defense strategies have been developed to mitigate this vulnerability. However, these defenses have shown their effectiveness against specific attacks and do not generalize well to different attacks. In this paper, we propose a framework for defending DNN classifiers against adversarial samples. The proposed method is based on a two-stage framework involving a separate detector and a denoising block. The detector aims to detect AEs by characterizing them through the use of natural scene statistics (NSS), where we demonstrate that these statistical features are altered by the presence of adversarial perturbations. The denoiser is based on the block-matching 3D (BM3D) filter fed by an optimum threshold value estimated by a convolutional neural network (CNN), which projects the samples detected as AEs back onto their data manifold. We conducted a complete evaluation on three standard datasets, namely MNIST, CIFAR-10 and Tiny-ImageNet. The experimental results show that the proposed defense method outperforms state-of-the-art defense techniques by improving robustness against a set of attacks under black-box, gray-box and white-box settings. The source code is available at: https://github.com/kherchouche-anouar/2DAE
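As an illustration of the two-stage pipeline described in the abstract, the following is a minimal Python sketch, not the authors' implementation (which is available at the linked repository). The helper names extract_nss_features, nss_detector, sigma_cnn, bm3d_filter and classifier are hypothetical placeholders for the NSS feature extractor, the trained detector, the threshold-estimating CNN, the BM3D denoiser and the protected DNN.

```python
# Minimal sketch of the detect-then-denoise pipeline; all helpers below are
# hypothetical placeholders, not the authors' API:
#   extract_nss_features : maps an image to its NSS feature vector
#   nss_detector         : binary classifier trained on NSS features (1 = adversarial)
#   sigma_cnn            : CNN that regresses the BM3D filtering threshold
#   bm3d_filter          : BM3D denoiser, e.g. bm3d.bm3d(img, sigma) from the `bm3d` package
#   classifier           : the protected DNN
import numpy as np

def defend(image, extract_nss_features, nss_detector, sigma_cnn, bm3d_filter, classifier):
    feats = np.asarray(extract_nss_features(image), dtype=np.float32)
    if nss_detector.predict(feats[None, :])[0] == 1:   # flagged as an adversarial example
        sigma = float(sigma_cnn(image))                # adaptive threshold for this input
        image = bm3d_filter(image, sigma)              # project back toward the clean data manifold
    return classifier(image)                           # classify the (possibly denoised) input
```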




Read also

Deep neural networks have demonstrated cutting-edge performance on various tasks, including classification. However, it is well known that adversarially designed, imperceptible perturbations of the input can mislead advanced classifiers. In this paper, Permutation Phase Defense (PPD) is proposed as a novel method to resist adversarial attacks. PPD combines a random permutation of the image with the phase component of its Fourier transform. The basic idea behind this approach is to turn the adversarial defense problem into an analogue of symmetric cryptography, which relies solely on safekeeping of the keys for security. In PPD, safekeeping of the selected permutation ensures effectiveness against adversarial attacks. Testing PPD on the MNIST and CIFAR-10 datasets yielded state-of-the-art robustness against the most powerful adversarial attacks currently available.
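The PPD transformation admits a short sketch. This is a minimal illustration assuming the secret permutation acts on flattened pixels and that only the 2-D Fourier phase is passed to the classifier; per-channel handling and normalization may differ from the paper.

```python
# Minimal sketch of the PPD input transformation (assumptions as stated above).
import numpy as np

def ppd_transform(image, perm):
    permuted = image.reshape(-1)[perm].reshape(image.shape)  # apply the secret "key" permutation
    return np.angle(np.fft.fft2(permuted))                   # keep only the phase component

rng = np.random.default_rng(seed=0)                  # the permutation (and its seed) is the secret key
perm = rng.permutation(28 * 28)                      # e.g. MNIST-sized 28x28 grayscale images
phase = ppd_transform(np.random.rand(28, 28), perm)  # train/evaluate the classifier on `phase`
```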
Deep neural networks (DNNs) are vulnerable to adversarial examples with small perturbations. Adversarial defense has thus become an important means of improving the robustness of DNNs by defending against adversarial examples. Existing defense methods focus on specific types of adversarial examples and may fail to defend well in real-world applications. In practice, we may face many types of attacks, and the exact type of adversarial examples encountered in real-world applications can even be unknown. In this paper, motivated by the observation that adversarial examples are more likely to appear near the classification boundary, we study adversarial examples from a new perspective: whether we can defend against them by pulling them back to the original clean distribution. We theoretically and empirically verify the existence of defense affine transformations that restore adversarial examples. Relying on this, we learn a defense transformer to counterattack adversarial examples by parameterizing the affine transformations and exploiting the boundary information of DNNs. Extensive experiments on both toy and real-world datasets demonstrate the effectiveness and generalization of our defense transformer.
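A rough PyTorch sketch of the core operation, applying a predicted 2-D affine transformation to an input batch, is given below. The parameter network param_net (mapping a batch to 2x3 affine matrices) is a hypothetical placeholder; the paper's actual parameterization, use of boundary information and training objective are not reproduced.

```python
# Minimal sketch of applying a learned affine transformation to an input batch.
import torch
import torch.nn.functional as F

def apply_defense_affine(param_net, x):
    """x: (N, C, H, W) batch; returns the affinely transformed batch."""
    theta = param_net(x).view(-1, 2, 3)                         # predicted affine parameters
    grid = F.affine_grid(theta, x.size(), align_corners=False)  # sampling grid for the transform
    return F.grid_sample(x, grid, align_corners=False)          # restored input fed to the DNN
```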
Despite being popularly used in many applications, neural network models have been found to be vulnerable to adversarial examples, i.e., carefully crafted examples aiming to mislead machine learning models. Adversarial examples can pose potential risks to safety- and security-critical applications. However, existing defense approaches are still vulnerable to attacks, especially in a white-box attack scenario. To address this issue, we propose a new defense approach, named MulDef, based on robustness diversity. Our approach consists of (1) a general defense framework based on multiple models and (2) a technique for generating these multiple models to achieve high defense capability. In particular, given a target model, our framework includes multiple models (constructed from the target model) to form a model family. The model family is designed to achieve robustness diversity (i.e., an adversarial example that successfully attacks one model cannot succeed in attacking other models in the family). At runtime, a model is randomly selected from the family to be applied to each input example. Our general framework can inspire rich future research on constructing a desirable model family achieving higher robustness diversity. Our evaluation results show that MulDef (with only up to 5 models in the family) can substantially improve the target models' accuracy on adversarial examples by 22-74% in a white-box attack scenario, while maintaining similar accuracy on legitimate examples.
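The runtime behaviour of such a model family can be sketched in a few lines; constructing a family with genuine robustness diversity, which is the core of MulDef, is not shown here.

```python
# Minimal sketch of per-query random model selection from a model family.
import random

class RandomModelFamily:
    def __init__(self, models):
        self.models = models                    # target model plus variants derived from it

    def predict(self, x):
        return random.choice(self.models)(x)    # fresh random draw for every input example
```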
Graph deep learning models, such as graph convolutional networks (GCN), achieve remarkable performance on tasks over graph data. Similar to other types of deep models, graph deep learning models often suffer from adversarial attacks. However, compared with non-graph data, the discrete features, graph connections and different definitions of imperceptible perturbations bring unique challenges and opportunities to adversarial attacks and defenses on graph data. In this paper, we propose both attack and defense techniques. For attack, we show that the discreteness problem can easily be resolved by introducing integrated gradients, which accurately reflect the effect of perturbing certain features or edges while still benefiting from parallel computation. For defense, we observe that the adversarially manipulated graph for a targeted attack differs statistically from normal graphs. Based on this observation, we propose a defense approach that inspects the graph and recovers the potential adversarial perturbations. Our experiments on a number of datasets show the effectiveness of the proposed methods.
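The integrated-gradients idea for scoring edges can be sketched as follows, assuming a differentiable GCN model(A, X) and a scalar target loss; the paper's choice of baseline, scaling and attack construction may differ.

```python
# Minimal sketch of integrated gradients over an adjacency matrix A with features X.
import torch

def edge_integrated_gradients(model, A, X, target_loss, steps=20):
    baseline = torch.zeros_like(A)              # empty graph as the reference point
    total = torch.zeros_like(A)
    for k in range(1, steps + 1):
        A_k = (baseline + (k / steps) * (A - baseline)).detach().requires_grad_(True)
        grad, = torch.autograd.grad(target_loss(model(A_k, X)), A_k)
        total += grad
    return (A - baseline) * total / steps       # per-edge attribution scores
```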
Deep neural networks (DNNs) have demonstrated success in multiple domains. However, DNN models are inherently vulnerable to adversarial examples, which are generated by adding adversarial perturbations to benign inputs to fool the DNN model into misclassifying them. In this paper, we present a cross-layer strategic ensemble framework and a suite of robust defense algorithms, which are attack-independent and capable of auto-repairing and auto-verifying the target model being attacked. Our strategic ensemble approach makes three original contributions. First, we employ input-transformation diversity to design the input-layer strategic transformation ensemble algorithms. Second, we utilize model-disagreement diversity to develop the output-layer strategic model ensemble algorithms. Finally, we create an input-output cross-layer strategic ensemble defense that strengthens defensibility by combining diverse input-transformation-based model ensembles with diverse output verification model ensembles. Evaluated over 10 attacks on the ImageNet dataset, we show that our strategic ensemble defense algorithms achieve high defense success rates and are more robust, with high attack prevention success rates and low benign false negative rates, compared to existing representative defense methods.
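A minimal sketch of the cross-layer idea, pairing a randomly chosen input transformation with a majority vote over an output-layer model ensemble, is shown below; the actual transformation set, model construction and verification logic are the paper's design choices and are only hinted at here.

```python
# Minimal sketch of a cross-layer ensemble prediction (models return class labels).
import random
from collections import Counter

def cross_layer_predict(x, input_transforms, model_ensemble):
    t = random.choice(input_transforms)          # input-layer strategic transformation
    votes = [m(t(x)) for m in model_ensemble]    # output-layer strategic model ensemble
    return Counter(votes).most_common(1)[0][0]   # majority vote as the verified prediction
```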