
DeepStrike: Remotely-Guided Fault Injection Attacks on DNN Accelerator in Cloud-FPGA

Added by Yukui Luo
Publication date: 2021
Language: English





As field-programmable gate arrays (FPGAs) are widely adopted in clouds to accelerate deep neural networks (DNNs), such virtualized environments pose many new security issues. This work investigates the integrity of DNN FPGA accelerators in clouds. It proposes DeepStrike, a remotely-guided attack based on power-glitching fault injections targeting DNN execution. We characterize the vulnerabilities of different DNN layers against fault injections on FPGAs and leverage time-to-digital converter (TDC) sensors to precisely control the timing of the fault injections. Experimental results show that the proposed attack can successfully disrupt the FPGA DSP kernel and misclassify the target victim DNN application.
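
To make the failure mode concrete, below is a minimal NumPy sketch (our illustration, not the paper's implementation or victim model) of how a single bit flip in a MAC accumulator, the kind of corruption a well-timed power glitch can induce in a DSP kernel, changes the winning class of a quantized fully connected layer. All sizes and values are hand-picked for the demo.

```python
import numpy as np

def fc_layer(x, w, fault=None):
    """Quantized fully connected layer with 64-bit accumulation.
    fault = (output_index, bit): flip one bit of that output's MAC
    accumulator, modelling a glitch-corrupted DSP result."""
    acc = x.astype(np.int64) @ w.astype(np.int64)
    if fault is not None:
        idx, bit = fault
        acc[idx] ^= np.int64(1) << np.int64(bit)
    return acc

# Toy 8-bit layer: 4 inputs, 2 output logits (values chosen by hand)
x = np.array([1, 2, 3, 4], dtype=np.int8)
w = np.array([[10, -5], [20, 0], [30, 5], [40, 10]], dtype=np.int8)

clean = fc_layer(x, w)                  # [300, 50] -> class 0
faulty = fc_layer(x, w, fault=(1, 20))  # flip bit 20 of logit 1
print("clean :", clean, "-> class", int(np.argmax(clean)))
print("faulty:", faulty, "-> class", int(np.argmax(faulty)))  # class 1
```

Flipping one mid-order accumulator bit overturns a 6:1 margin between the logits, which is why the precise glitch timing provided by the TDC sensors matters more than glitch strength.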




Related Research

Fault injections are increasingly used to attack and test secure applications. In this paper, we define formal models of runtime monitors that can detect fault injections resulting in test inversion attacks and arbitrary jumps in the control flow. Runtime verification monitors offer several advantages. The code implementing a monitor is small compared to the entire application code. Monitors have a formal semantics, and we prove that they effectively detect attacks. Each monitor is a module dedicated to detecting an attack and can be deployed as needed to secure the application. A monitor can run separately from the application or be woven into the application. Our monitors have been validated by detecting simulated attacks on a program that verifies a user PIN.
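
As a rough sketch of the monitoring idea (hypothetical Python, not the paper's formal monitor models), the routine below accumulates a step signature along the legal control-flow path of a PIN check; a fault that skips or inverts the comparison leaves a signature the monitor rejects.

```python
# Signatures of the three legal steps of the sensitive routine (toy values)
STEPS = (0xA1, 0xB2, 0xC3)

def _mix(sig, step):
    return (sig * 31 + step) & 0xFFFF

EXPECTED_SIG = 0
for s in STEPS:
    EXPECTED_SIG = _mix(EXPECTED_SIG, s)

def verify_pin(entered, stored, fault_skips_check=False):
    sig = _mix(0, STEPS[0])          # step 1: inputs read
    if not fault_skips_check:        # a glitch can skip/invert this test
        sig = _mix(sig, STEPS[1])    # step 2: comparison actually ran
        ok = entered == stored
    else:
        ok = True                    # attacker-forced "authenticated"
    sig = _mix(sig, STEPS[2])        # step 3: finalize
    if sig != EXPECTED_SIG:          # monitor: some step never executed
        raise RuntimeError("control-flow deviation detected")
    return ok

print(verify_pin("1234", "1234"))    # True; monitor stays silent
try:
    verify_pin("0000", "1234", fault_skips_check=True)
except RuntimeError as e:
    print("alarm:", e)               # the skipped comparison is detected
```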
Today's mobile devices contain densely packaged system-on-chips (SoCs) with multi-core, high-frequency CPUs and complex pipelines. In parallel, sophisticated SoC-assisted security mechanisms have become commonplace for protecting device data, such as trusted execution environments and full-disk and file-based encryption. Both advancements have dramatically complicated the use of conventional physical attacks, requiring the development of specialised attacks. In this survey, we consolidate recent developments in physical fault injection and side-channel attacks on modern mobile devices. In total, we comprehensively survey over 50 fault injection and side-channel attack papers published between 2009 and 2021. We evaluate the prevailing methods, compare existing attacks using a common set of criteria, identify several challenges and shortcomings, and suggest future directions of research.
Ge Li, Mohit Tiwari, 2021
With the proliferation of DNN-based applications, the confidentiality of DNN models is an important commercial goal. Spatial accelerators, which parallelize matrix/vector operations, are used to enhance the energy efficiency of DNN computation. Recently, model extraction attacks on simple accelerators, either with a single processing element or running a binarized network, were demonstrated using a methodology derived from the differential power analysis (DPA) attack on cryptographic devices. This paper investigates the vulnerability of realistic spatial accelerators using a general, 8-bit number representation. We investigate two systolic array architectures with weight-stationary dataflow: (1) a 3×1 array for a dot-product operation, and (2) a 3×3 array for matrix-vector multiplication. Both are implemented on the SAKURA-G FPGA board. We show that both architectures are ultimately vulnerable. A conventional DPA succeeds fully on the 1D array, requiring 20K power measurements. However, the 2D array exhibits higher security even with 460K traces. We show that this is because the 2D array intrinsically entails multiple MACs simultaneously dependent on the same input. However, we find that a novel template-based DPA with multiple profiling phases is able to fully break the 2D array with only 40K traces. Corresponding countermeasures need to be investigated for spatial DNN accelerators.
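
The following is a simulated sketch of the underlying DPA/CPA methodology (synthetic traces and a single hypothetical 8-bit weight; not the SAKURA-G measurements or the multi-phase template attack): the leakage predicted by every weight hypothesis is correlated against the traces, and the correct weight scores highest.

```python
import numpy as np

rng = np.random.default_rng(1)
HW = np.array([bin(v).count("1") for v in range(256)])  # Hamming weights

# Simulated weight-stationary MAC leakage: Hamming weight of the low
# byte of (input * secret_weight) plus Gaussian measurement noise.
secret_w = 173
inputs = rng.integers(0, 256, size=5000)
traces = HW[(inputs * secret_w) & 0xFF] + rng.normal(0.0, 2.0, inputs.size)

# CPA-style DPA: correlate the leakage predicted by every weight
# hypothesis against the traces; the correct weight scores highest.
scores = np.zeros(256)
for w in range(1, 256):        # w = 0 predicts constant leakage; skip it
    scores[w] = abs(np.corrcoef(HW[(inputs * w) & 0xFF], traces)[0, 1])
print("recovered weight:", int(np.argmax(scores)), "| true:", secret_w)
```

The paper's 2D-array observation fits this picture: when several MACs leak simultaneously for the same input, exactly this correlation peak is flattened, until template profiling separates the contributions.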
Jakub Breier, Wei He, 2017
The Internet of Things connects many small, constrained devices to the Internet. As in any other environment, communication security is important, and cryptographic algorithms are one of many elements we use to keep messages secure. Because of the constrained nature of these environments, it is necessary to use algorithms that do not require high computational power. Lightweight ciphers are therefore ideal candidates for this purpose. In this paper, we explore the possibility of attacking the ultra-lightweight cipher PRESENT using a multiple fault attack. Utilizing the Differential Fault Analysis technique, we were able to recover the secret key with two faulty encryptions and an exhaustive search of the 2^16 remaining key bits. Our attack targets four nibbles in the penultimate round of the cipher, causing faulty output in all nibbles of the output. We also provide a practical attack scenario by exploiting a Hardware Trojan (HT) technique for the proposed fault injection in a Xilinx Spartan-6 FPGA.
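
Below is a simplified single-S-box illustration of the DFA filtering step (our sketch; the actual attack targets four nibbles of the penultimate round and finishes with the 2^16 exhaustive search): each correct/faulty ciphertext pair keeps only the key-nibble guesses consistent with the injected input difference, and intersecting a few pairs isolates the key nibble.

```python
# PRESENT 4-bit S-box and its inverse
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
INV_SBOX = [SBOX.index(v) for v in range(16)]

def surviving_keys(c, c_faulty, delta_in):
    """Key-nibble guesses consistent with: the fault XORed delta_in into
    the S-box input, and the key nibble was XORed onto its output."""
    return {k for k in range(16)
            if INV_SBOX[c ^ k] ^ INV_SBOX[c_faulty ^ k] == delta_in}

# Simulate one S-box in a final round with a secret key nibble (toy setup)
key, delta = 0x9, 0x1
pairs = [(SBOX[u] ^ key, SBOX[u ^ delta] ^ key) for u in (0x4, 0xA, 0x3)]

candidates = set(range(16))
for c, cf in pairs:              # each fault prunes the guess set
    candidates &= surviving_keys(c, cf, delta)
print("surviving key-nibble guesses:", sorted(candidates))  # contains 0x9
```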
Machine learning (ML) applications are increasingly prevalent. Protecting the confidentiality of ML models becomes paramount for two reasons: (a) a model can be a business advantage to its owner, and (b) an adversary may use a stolen model to find transferable adversarial examples that can evade classification by the original model. Access to the model can be restricted to be only via well-defined prediction APIs. Nevertheless, prediction APIs still provide enough information to allow an adversary to mount model extraction attacks by sending repeated queries via the prediction API. In this paper, we describe new model extraction attacks using novel approaches for generating synthetic queries and optimizing training hyperparameters. Our attacks outperform state-of-the-art model extraction attacks in terms of transferability of both targeted and non-targeted adversarial examples (by up to 29-44 percentage points, pp) and prediction accuracy (by up to 46 pp) on two datasets. We provide takeaways on how to perform effective model extraction attacks. We then propose PRADA, the first step towards generic and effective detection of DNN model extraction attacks. It analyzes the distribution of consecutive API queries and raises an alarm when this distribution deviates from benign behavior. We show that PRADA can detect all prior model extraction attacks with no false positives.
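
A minimal sketch of a PRADA-style detector follows (illustrative threshold and data; SciPy's Shapiro-Wilk test stands in for the paper's normality test): it tracks each query's minimum distance to all previous queries and raises an alarm when those distances stop looking normally distributed, as happens with the near-duplicate synthetic queries typical of extraction attacks.

```python
import numpy as np
from scipy.stats import shapiro

def extraction_alarm(queries, w_threshold=0.90):
    """PRADA-style sketch: track each query's minimum L2 distance to all
    previous queries; alarm when the Shapiro-Wilk W statistic says the
    distances no longer look normally distributed."""
    q = np.asarray(queries, dtype=float)
    dists = [np.min(np.linalg.norm(q[:i] - q[i], axis=1))
             for i in range(1, len(q))]
    w_stat, _ = shapiro(dists)
    return w_stat < w_threshold        # True -> flag the client

rng = np.random.default_rng(2)
benign = rng.normal(0, 1, size=(200, 16))       # natural, varied queries
seeds = rng.normal(0, 1, size=(20, 16))         # extraction-style stream:
synthetic = np.repeat(seeds, 10, axis=0) + rng.normal(0, 1e-3, (200, 16))

print("benign stream flagged:   ", extraction_alarm(benign))
print("synthetic stream flagged:", extraction_alarm(synthetic))
```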
