
EI-MTD: Moving Target Defense for Edge Intelligence against Adversarial Attacks

Posted by: Yaguan Qian
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





With the boom of edge intelligence, its vulnerability to adversarial attacks has become an urgent problem. A so-called adversarial example can fool a deep learning model on an edge node into misclassifying. Owing to the transferability property, an adversary can easily mount a black-box attack using a locally trained substitute model. However, resource-constrained edge nodes cannot afford the complicated defense mechanisms that are feasible in a cloud data center. To overcome this challenge, we propose a dynamic defense mechanism called EI-MTD. It first obtains small, robust member models through differential knowledge distillation from a complex teacher model in the cloud data center. Then, a dynamic scheduling policy based on a Bayesian Stackelberg game chooses which member model serves each request. This dynamic defense prevents the adversary from selecting an optimal substitute model for black-box attacks. Our experimental results show that the dynamic scheduling effectively protects edge intelligence against adversarial attacks in the black-box setting.
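
A minimal Python sketch of the two ingredients above, assuming illustrative names and toy numbers rather than the authors' implementation: a temperature-scaled distillation loss of the kind used to train small member models, and a scheduler that samples a member model per request from the defender's mixed strategy, as a Bayesian Stackelberg equilibrium would prescribe (the "differential" diversity term among member models is omitted here).

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax used in knowledge distillation."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)      # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """Cross-entropy between softened teacher and student outputs (batch mean)."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return -np.sum(p_teacher * np.log(p_student + 1e-12), axis=-1).mean()

def schedule_target_model(member_models, mixed_strategy, rng=None):
    """Dynamic scheduling: sample one member model for this request according to
    the defender's mixed strategy (probabilities over member models)."""
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(member_models), p=mixed_strategy)
    return member_models[idx]

# Toy usage: three distilled member models served under an assumed equilibrium strategy.
members = ["member_A", "member_B", "member_C"]
strategy = [0.5, 0.3, 0.2]
print(schedule_target_model(members, strategy))
```

Because the model that answers each query changes, a substitute model tuned against any single member transfers less reliably, which is the intuition behind the scheduling defense described in the abstract.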


Read also

Ali Borji, 2020
Humans rely heavily on shape information to recognize objects. Conversely, convolutional neural networks (CNNs) are biased more towards texture. This is perhaps the main reason why CNNs are vulnerable to adversarial examples. Here, we explore how shape bias can be incorporated into CNNs to improve their robustness. Two algorithms are proposed, based on the observation that edges are invariant to moderate imperceptible perturbations. In the first one, a classifier is adversarially trained on images with the edge map as an additional channel. At inference time, the edge map is recomputed and concatenated to the image. In the second algorithm, a conditional GAN is trained to translate the edge maps, from clean and/or perturbed images, into clean images. Inference is done over the generated image corresponding to the input's edge map. Extensive experiments over 10 datasets demonstrate the effectiveness of the proposed algorithms against FGSM and $\ell_\infty$ PGD-40 attacks. Further, we show that a) edge information can also benefit other adversarial training methods, and b) CNNs trained on edge-augmented inputs are more robust against natural image corruptions such as motion blur, impulse noise and JPEG compression than CNNs trained solely on RGB images. From a broader perspective, our study suggests that CNNs do not adequately account for image structures that are crucial for robustness. Code is available at: https://github.com/aliborji/Shapedefence.git.
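
As a rough sketch of the first algorithm (an edge map recomputed at inference time and concatenated to the image as an extra channel), the Python snippet below uses a crude gradient-magnitude map in place of a proper edge detector; array shapes and names are illustrative, not taken from the released code.

```python
import numpy as np

def edge_map(rgb):
    """Crude gradient-magnitude edge map of an HxWx3 image with values in [0, 1]."""
    gray = rgb.mean(axis=-1)
    gy, gx = np.gradient(gray)
    mag = np.sqrt(gx ** 2 + gy ** 2)
    return mag / (mag.max() + 1e-12)

def with_edge_channel(rgb):
    """Return an HxWx4 input: the RGB image plus its recomputed edge channel."""
    return np.concatenate([rgb, edge_map(rgb)[..., None]], axis=-1)

x = np.random.rand(32, 32, 3)       # stand-in for a (possibly perturbed) test image
x_aug = with_edge_channel(x)        # shape (32, 32, 4), fed to the edge-augmented classifier
print(x_aug.shape)
```
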
Moving Target Defense (MTD) has emerged as a newcomer in the asymmetric field of attack and defense, and shuffling-based MTD has been regarded as one of the most effective ways to mitigate DDoS attacks. However, previous work does not acknowledge that frequent shuffles significantly intensify the overhead. MTD requires a quantitative measure to compare the cost and effectiveness of available adaptations and to explore the best trade-off between them. In this paper, we therefore propose a new cost-effective shuffling method against DDoS attacks using MTD. By exploiting Multi-Objective Markov Decision Processes to model the interaction between the attacker and the defender, and designing a cost-effective shuffling algorithm, we study the best trade-off between the effectiveness and cost of shuffling in a given shuffling scenario. Finally, simulation and experimentation on an experimental software-defined network (SDN) indicate that our approach imposes an acceptable shuffling overhead and is effective in mitigating DDoS attacks.
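
The trade-off being optimized can be caricatured as scoring each candidate shuffling action by its mitigation benefit minus a weighted shuffling cost; the toy Python below is a scalarized stand-in for the paper's Multi-Objective Markov Decision Process formulation, with made-up numbers.

```python
def best_shuffle(actions, cost_weight=0.5):
    """actions: list of (name, benefit, cost) tuples; pick the best scalarized trade-off."""
    return max(actions, key=lambda a: a[1] - cost_weight * a[2])[0]

candidates = [
    ("no_shuffle",     0.0, 0.0),   # benefit and cost values are purely illustrative
    ("shuffle_subset", 0.6, 0.2),
    ("shuffle_all",    0.9, 0.9),
]
print(best_shuffle(candidates))     # -> "shuffle_subset" at this cost weight
```
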
Bin Zhu, Zhaoquan Gu, Le Wang, 2021
Recent work shows that deep neural networks are vulnerable to adversarial examples. Much work studies adversarial example generation, while very little focuses on the more critical problem of adversarial defense. Existing adversarial detection methods usually make assumptions about the adversarial example and the attack method (e.g., the word frequency of the adversarial example, or the perturbation level of the attack). However, this limits the applicability of the detection methods. To this end, we propose TREATED, a universal adversarial detection method that can defend against attacks of various perturbation levels without making any assumptions. TREATED identifies adversarial examples through a set of well-designed reference models. Extensive experiments on three competitive neural networks and two widely used datasets show that our method achieves better detection performance than the baselines. Finally, we conduct ablation studies to verify the effectiveness of our method.
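
A hedged sketch of detection by reference models: flag an input when the protected model's prediction disagrees with a clear majority of the reference models. TREATED's actual decision rule is more carefully designed; the function and threshold below are only illustrative.

```python
from collections import Counter

def is_adversarial(target_pred, reference_preds, min_agree=2):
    """Flag the input if at least `min_agree` reference models agree on a label
    that differs from the protected (target) model's prediction."""
    label, votes = Counter(reference_preds).most_common(1)[0]
    return votes >= min_agree and label != target_pred

# Toy usage: two of three reference models disagree with the target model.
print(is_adversarial("positive", ["negative", "negative", "positive"]))  # True
```
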
In social networks, privacy protection might be achieved by removing some target-sensitive links. However, some hidden links can still be re-observed by link prediction methods applied to the observable network. In this paper, the conventional link prediction method named Resource Allocation Index (RA) is adopted for privacy attacks. Several defense methods are proposed, including heuristic and evolutionary approaches, to protect targeted links from RA attacks via evolutionary perturbations. This is the first study of privacy protection for targeted links against link-prediction-based attacks. Some links are randomly selected from the network as targeted links for experimentation. The simulation results on six real-world networks demonstrate the superiority of the evolutionary perturbation approach for defending targeted links against RA attacks. Moreover, transfer experiments show that, although the evolutionary perturbation approach is designed against RA attacks, it is also effective against other link-prediction-based attacks.
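
For concreteness, the Resource Allocation index scores a candidate link (x, y) by summing 1/degree(z) over the common neighbours z of x and y; the evolutionary perturbations aim to drive the scores of targeted links down. A small Python sketch on a toy graph (not the paper's code):

```python
def ra_index(adj, x, y):
    """Resource Allocation index for the candidate link (x, y).
    adj maps each node to the set of its neighbours in the observable network."""
    common = adj[x] & adj[y]
    return sum(1.0 / len(adj[z]) for z in common)

adj = {                              # toy undirected graph
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b", "d"},
    "d": {"b", "c"},
}
print(ra_index(adj, "a", "d"))       # common neighbours b and c -> 1/3 + 1/3
```
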
The Spectre vulnerability in modern processors has been widely reported. The key insight behind this vulnerability is that speculative execution in processors can be misused to access secrets. Even though the speculatively executed instructions are subsequently squashed, the secret may linger in micro-architectural state such as the cache and can potentially be accessed by an attacker via side channels. In this paper, we propose oo7, a static analysis approach that mitigates Spectre attacks by detecting potentially vulnerable code snippets in program binaries and protecting them against the attack by patching them. Our key contribution is to balance the concerns of effectiveness, analysis time, and run-time overhead. We employ control flow extraction, taint analysis, and address analysis to detect tainted conditional branches and speculative memory accesses. oo7 can detect all fifteen purpose-built Spectre-vulnerable code patterns, whereas the Microsoft compiler with the Spectre mitigation option detects only two of them. We also report the results of a large-scale study applying oo7 to over 500 program binaries (average binary size 261 KB) from different real-world projects. We protect programs against Spectre attacks by selectively inserting fences only at vulnerable conditional branches to prevent speculative execution. Our approach is experimentally observed to incur around 5.9% performance overhead on SPECint benchmarks.
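
A toy Python sketch of the selective mitigation policy described above, with an assumed branch-record format that is not oo7's real data model: a fence is placed only at conditional branches whose condition is attacker-tainted and that guard a tainted memory access, i.e. the Spectre-v1 pattern.

```python
def branches_to_fence(branches):
    """Return addresses of branches matching the Spectre-v1 pattern that need a fence."""
    return [b["addr"] for b in branches
            if b["tainted_condition"] and b["guards_tainted_access"]]

branches = [
    {"addr": 0x401a20, "tainted_condition": True,  "guards_tainted_access": True},
    {"addr": 0x401b44, "tainted_condition": True,  "guards_tainted_access": False},
    {"addr": 0x401c08, "tainted_condition": False, "guards_tainted_access": True},
]
print([hex(a) for a in branches_to_fence(branches)])   # only 0x401a20 gets a fence
```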
