
Power-Based Attacks on Spatial DNN Accelerators

Added by Ge Li
Publication date: 2021
Research language: English





With the proliferation of DNN-based applications, the confidentiality of DNN models is an important commercial goal. Spatial accelerators, which parallelize matrix/vector operations, are used to improve the energy efficiency of DNN computation. Recently, model extraction attacks on simple accelerators, either with a single processing element or running a binarized network, were demonstrated using methodology derived from differential power analysis (DPA) attacks on cryptographic devices. This paper investigates the vulnerability of realistic spatial accelerators using a general, 8-bit number representation. We investigate two systolic array architectures with weight-stationary dataflow: (1) a 3×1 array for a dot-product operation, and (2) a 3×3 array for matrix-vector multiplication. Both are implemented on the SAKURA-G FPGA board. We show that both architectures are ultimately vulnerable. A conventional DPA succeeds fully on the 1D array, requiring 20K power measurements. The 2D array, however, exhibits higher security, resisting attack even with 460K traces. We show that this is because the 2D array intrinsically entails multiple MACs simultaneously dependent on the same input. Nevertheless, we find that a novel template-based DPA with multiple profiling phases is able to fully break the 2D array with only 40K traces. Corresponding countermeasures need to be investigated for spatial DNN accelerators.
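
To make the attack methodology concrete, here is a minimal sketch of the correlation-style DPA that the abstract builds on, applied to a single weight-stationary processing element: each 8-bit weight hypothesis predicts the leakage of the MAC output under a Hamming-weight model, and the hypothesis that correlates best with the measured traces is taken as the recovered weight. The trace/input arrays, the leakage model, and the 16-bit product truncation are illustrative assumptions, not the paper's actual experimental setup.

```python
import numpy as np

def hamming_weight(values):
    # Bit count of each element (simple Python loop wrapped in a numpy array).
    return np.array([bin(int(v)).count("1") for v in values])

def dpa_recover_weight(traces, inputs, num_bits=8):
    """Correlation-style DPA sketch for one weight-stationary MAC.

    traces : (N, S) array of N power traces with S samples each (hypothetical data)
    inputs : (N,)   array of the known 8-bit activations fed to the processing element
    Returns the weight hypothesis whose predicted leakage best matches some
    sample point of the measured traces, plus that correlation value.
    """
    t = traces - traces.mean(axis=0)              # centre each sample column
    t_norm = np.sqrt((t * t).sum(axis=0)) + 1e-12
    best_corr, best_weight = -1.0, None
    for w in range(1 << num_bits):                # all 256 weight hypotheses
        # Hamming-weight leakage model of the (truncated) MAC product.
        hyp = hamming_weight((inputs.astype(np.int64) * w) & 0xFFFF)
        hyp = hyp - hyp.mean()
        corr = np.max(np.abs(hyp @ t) / (np.sqrt(hyp @ hyp) * t_norm + 1e-12))
        if corr > best_corr:
            best_corr, best_weight = corr, w
    return best_weight, best_corr
```

Extending this directly to the 3×3 array is what the abstract identifies as the hard case: several MACs toggle on the same input in each cycle, so their leakages superimpose, which is why the paper resorts to a template-based DPA with multiple profiling phases.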



Related research

Machine learning (ML) applications are increasingly prevalent. Protecting the confidentiality of ML models becomes paramount for two reasons: (a) a model can be a business advantage to its owner, and (b) an adversary may use a stolen model to find transferable adversarial examples that can evade classification by the original model. Access to the model can be restricted so that it is available only via well-defined prediction APIs. Nevertheless, prediction APIs still provide enough information to allow an adversary to mount model extraction attacks by sending repeated queries via the prediction API. In this paper, we describe new model extraction attacks using novel approaches for generating synthetic queries and optimizing training hyperparameters. Our attacks outperform state-of-the-art model extraction attacks in terms of transferability of both targeted and non-targeted adversarial examples (up to +29-44 percentage points, pp), and prediction accuracy (up to +46 pp) on two datasets. We provide takeaways on how to perform effective model extraction attacks. We then propose PRADA, the first step towards generic and effective detection of DNN model extraction attacks. It analyzes the distribution of consecutive API queries and raises an alarm when this distribution deviates from benign behavior. We show that PRADA can detect all prior model extraction attacks with no false positives.
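
As an illustration of the query-based extraction loop described above, the following sketch labels synthetic inputs through the victim's prediction API and retrains a local substitute, steering new queries with the substitute's own gradients. The query_api and substitute.gradient interfaces, the step size, and the number of rounds are hypothetical placeholders; the paper's actual query-synthesis and hyperparameter-optimization strategies are not reproduced here.

```python
import numpy as np

def extract_model(query_api, substitute, seed_inputs, rounds=5, step=0.1):
    """Query-based model extraction loop under assumed interfaces.

    query_api(X)              -> victim's predicted labels for a batch X
    substitute.fit(X, y)      -> trains the local copy on the stolen labels
    substitute.gradient(X, y) -> d(loss)/dX of the substitute (hypothetical
                                 method, used to steer the next synthetic queries)
    seed_inputs               -> small set of unlabeled inputs the attacker starts with
    """
    X = np.asarray(seed_inputs, dtype=np.float32)
    for _ in range(rounds):
        y = query_api(X)          # repeated queries via the prediction API
        substitute.fit(X, y)      # retrain the stolen copy on the returned labels
        # Jacobian-style augmentation: push inputs along the substitute's loss
        # gradient so the next queries probe the victim near its decision boundary.
        X = np.concatenate([X, X + step * np.sign(substitute.gradient(X, y))], axis=0)
    return substitute
```
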
As field-programmable gate arrays (FPGAs) are widely adopted in clouds to accelerate deep neural networks (DNNs), such virtualization environments have raised many new security issues. This work investigates the integrity of DNN FPGA accelerators in clouds. It proposes DeepStrike, a remotely guided attack based on power-glitching fault injection targeting DNN execution. We characterize the vulnerabilities of different DNN layers against fault injections on FPGAs and leverage time-to-digital converter (TDC) sensors to precisely control the timing of fault injections. Experimental results show that our proposed attack can successfully disrupt the FPGA DSP kernel and cause the target victim DNN application to misclassify.
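
The abstract above relies on on-chip TDC sensors to time the glitch. A rough sketch of that idea, assuming a polling interface to the sensor and a glitch trigger (both hypothetical), is shown below: a sustained drop in the TDC reading is treated as a proxy for the victim layer's power draw and used to fire the fault at the right moment.

```python
def glitch_when_layer_active(read_tdc, fire_glitch, idle_level, drop_threshold,
                             max_polls=1_000_000):
    """TDC-guided fault timing, sketched under assumed interfaces.

    read_tdc()     -> current TDC sensor count (hypothetical FPGA readout);
                      the count drops when the shared supply voltage dips.
    fire_glitch()  -> enables the power-wasting glitch circuit (hypothetical).
    idle_level     -> TDC reading measured while the victim is idle.
    drop_threshold -> drop in counts taken to mean the victim layer is executing.
    """
    for _ in range(max_polls):
        if idle_level - read_tdc() > drop_threshold:
            fire_glitch()        # inject the fault while the target layer runs
            return True
    return False                 # victim activity never observed
```
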
Graph modeling allows numerous security problems to be tackled in a general way; however, little work has been done to understand the ability of such models to withstand adversarial attacks. We design and evaluate two novel graph attacks against a state-of-the-art network-level, graph-based detection system. Our work highlights areas in adversarial machine learning that have not yet been addressed, specifically: graph-based clustering techniques, and a global feature space where realistic attackers without perfect knowledge must be accounted for (by the defenders) in order to be practical. Even though less informed attackers can evade graph clustering with low cost, we show that some practical defenses are possible.
Additive Manufacturing (AM), a.k.a. 3D Printing, is increasingly used to manufacture functional parts of safety-critical systems. AM's dependence on computerization raises the concern that the AM process can be tampered with and a part's mechanical properties sabotaged. This can lead to the destruction of a system employing the sabotaged part, causing loss of life, financial damage, and reputation loss. To address this threat, we propose a novel approach for detecting sabotage attacks. Our approach is based on continuous monitoring of the current delivered to all actuators during the manufacturing process and detection of deviations from a provably benign process. The proposed approach has numerous advantages: (i) it is non-invasive in a time-critical process, (ii) it can be retrofitted in legacy systems, and (iii) it is air-gapped from the computerized components of the AM process, preventing simultaneous compromise. Evaluation on a desktop 3D printer detects all attacks involving a modification of X or Y motor movement, with a false-positive rate of 0%.
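
A minimal sketch of the current-monitoring check described above: each actuator's measured current trace is compared against a known-benign reference trace and flagged if the deviation exceeds a tolerance. The trace format, the truncation-based alignment, and the tolerance value are illustrative assumptions rather than the paper's detection model.

```python
import numpy as np

def detect_sabotage(measured, reference, tolerance=0.05):
    """Flag actuators whose current trace deviates from a benign reference.

    measured  : dict of actuator name -> 1-D current trace from this print (amps)
    reference : dict with the same keys, traces recorded from a verified benign run
    tolerance : allowed mean absolute deviation as a fraction of the reference
                trace's mean magnitude (an illustrative choice, not the paper's)
    """
    flagged = []
    for name, trace in measured.items():
        ref = np.asarray(reference[name], dtype=float)
        cur = np.asarray(trace, dtype=float)
        n = min(len(cur), len(ref))              # crude alignment by truncation
        deviation = np.mean(np.abs(cur[:n] - ref[:n]))
        if deviation > tolerance * np.mean(np.abs(ref[:n])):
            flagged.append(name)                 # e.g. tampered X/Y motor movement
    return flagged
```
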
Yusi Lei, Sen Chen, Lingling Fan (2020)
Machine learning (ML) based approaches have been the mainstream solution for anti-phishing detection. When they are deployed on the client side, ML-based classifiers are vulnerable to evasion attacks. However, such potential threats have received relatively little attention because existing attacks destroy the functionality or appearance of webpages and are conducted in the white-box scenario, making them less practical. Consequently, it becomes imperative to understand whether it is possible to launch evasion attacks with limited knowledge of the classifier while preserving functionality and appearance. In this work, we show that even in the grey- and black-box scenarios, evasion attacks are not only effective on practical ML-based classifiers but can also be launched efficiently without destroying the functionality and appearance. For this purpose, we propose three mutation-based attacks, differing in their knowledge of the target classifier, that address a key technical challenge: automatically crafting an adversarial sample from a known phishing website in a way that can mislead classifiers. To launch attacks in the white- and grey-box scenarios, we also propose a sample-based collision attack to gain knowledge of the target classifier. We demonstrate the effectiveness and efficiency of our evasion attacks on the state-of-the-art Google phishing page filter, achieving a 100% attack success rate in less than one second per website. Moreover, the transferability attack on BitDefender's industrial phishing page classifier, TrafficLight, achieved up to an 81.25% attack success rate. We further propose Pelican, a similarity-based method to mitigate such evasion attacks, and demonstrate that Pelican can effectively detect them. Our findings contribute to the design of more robust phishing website classifiers in practice.
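
The mutation-based, black-box attacks described above can be pictured as a simple loop: apply functionality- and appearance-preserving mutations to a phishing page until the classifier stops flagging it. The classifier interface and mutation operators below are hypothetical stand-ins for the paper's concrete mutation strategies.

```python
import random

def evade_classifier(is_phishing, page, mutations, max_tries=200):
    """Black-box mutation loop against an anti-phishing classifier.

    is_phishing(page) -> True if the classifier flags the page (black-box query)
    page              -> some representation of the phishing page (hypothetical)
    mutations         -> list of functions page -> page that preserve the page's
                         appearance and functionality (e.g. renaming internal ids)
    Returns an evasive variant of the page, or None if none was found.
    """
    current = page
    for _ in range(max_tries):
        if not is_phishing(current):                  # no longer flagged: success
            return current
        current = random.choice(mutations)(current)   # apply a random mutation
    return None                                       # still detected after max_tries
```
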
