
Rosita: Towards Automatic Elimination of Power-Analysis Leakage in Ciphers

Added by Madura Shelton
Publication date: 2019
Language: English





Since their introduction over two decades ago, side-channel attacks have presented a serious security threat. While many cipher implementations employ masking techniques to protect against such attacks, they often still leak secret information due to unintended interactions in the hardware. We present Rosita, a code rewrite engine that uses a leakage emulator, which we amend to correctly emulate the micro-architecture of a target system. We use Rosita to automatically protect masked implementations of AES, ChaCha, and Xoodoo. For AES and Xoodoo, we show the absence of observable leakage at 1,000,000 traces with a performance penalty of less than 21%. For ChaCha, which has significantly more leakage, Rosita eliminates over 99% of the leakage at a performance cost of 64%.
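To make the setting concrete, here is a minimal Python sketch of the first-order Boolean masking that such protected implementations rely on: the secret is split into two random shares, the shares are processed separately, and they are recombined only where the plain value is needed. This illustrates the masking scheme only, not Rosita itself, which rewrites the target code guided by its leakage emulator; all names below are illustrative.

```python
import secrets

def mask_byte(secret: int) -> tuple[int, int]:
    """Split a secret byte into two Boolean shares: secret = s0 XOR s1."""
    s0 = secrets.randbits(8)
    s1 = secret ^ s0
    return s0, s1

def masked_xor_const(shares: tuple[int, int], const: int) -> tuple[int, int]:
    """XOR a public constant into the masked value by updating one share only,
    so the two shares are never combined during the computation."""
    s0, s1 = shares
    return s0, s1 ^ const

def unmask(shares: tuple[int, int]) -> int:
    """Recombine the shares (done only where the plain value is needed)."""
    s0, s1 = shares
    return s0 ^ s1

if __name__ == "__main__":
    key_byte = 0x2B                      # example secret byte
    shares = mask_byte(key_byte)
    shares = masked_xor_const(shares, 0x5A)
    assert unmask(shares) == key_byte ^ 0x5A
    print("masked result:", hex(unmask(shares)))
```

Each share on its own is statistically independent of the secret; the unintended hardware interactions mentioned above arise when the device transiently combines values derived from both shares (for example through register-overwrite or bus-transition effects), and Rosita's emulator-guided rewriting targets exactly those interactions.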

Related research

Boris Ryabko, 2016
We describe generalized running-key ciphers and apply them to the analysis of two of Shannon's methods. In particular, we give estimates of the cipher equivocation and of the probability of correct deciphering without the key.
Duc-Phong Le, Rongxing Lu, 2020
Advances in the Internet of Things (IoT) have had a fundamental impact on and influence in shaping our rich living experiences. However, since IoT devices are usually resource-constrained, lightweight block ciphers play a major role as building blocks for secure IoT protocols. At CHES 2015, SIMECK, a family of block ciphers, was designed for resource-constrained IoT devices. Since its publication, there have been many analyses of its security. In this paper, under the one bit-flip model, we propose a new, efficient fault analysis attack on SIMECK ciphers. Compared to previously reported attacks, our attack can recover the full master key by injecting faults into only a single round of all SIMECK family members. This property is crucial, as it is infeasible for an attacker to inject faults into different rounds of a SIMECK implementation on IoT devices in the real world. Specifically, our attack is characterized by a deep analysis of the differential trail between the correct and faulty intermediate ciphertexts. Extensive simulation evaluations are conducted, and the results demonstrate the effectiveness and correctness of our proposed attack.
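As a rough illustration of the one bit-flip fault model, the Python sketch below encrypts a block twice with a SIMECK-style round function f(x) = (x AND (x <<< 5)) XOR (x <<< 1), flips a single state bit in one round of the second run, and prints the XOR differential between the correct and faulty ciphertexts. The round keys are random placeholders rather than the real SIMECK key schedule, and the fault position is chosen arbitrarily; this sketches the setting the attack analyzes, not the attack itself.

```python
import random

WORD = 16                      # word size for a Simeck32/64-style state
MASK = (1 << WORD) - 1

def rotl(x: int, r: int) -> int:
    return ((x << r) | (x >> (WORD - r))) & MASK

def round_f(x: int) -> int:
    # SIMECK-style round function: (x AND (x <<< 5)) XOR (x <<< 1)
    return ((x & rotl(x, 5)) ^ rotl(x, 1)) & MASK

def encrypt(left: int, right: int, round_keys, fault_round=None, fault_bit=0):
    """Feistel-style encryption; optionally flip one bit of the left word
    at the start of `fault_round` to model a single bit-flip fault."""
    for i, k in enumerate(round_keys):
        if i == fault_round:
            left ^= 1 << fault_bit
        left, right = (right ^ round_f(left) ^ k) & MASK, left
    return left, right

if __name__ == "__main__":
    rng = random.Random(0)
    round_keys = [rng.getrandbits(WORD) for _ in range(32)]  # placeholder keys
    pt = (rng.getrandbits(WORD), rng.getrandbits(WORD))

    correct = encrypt(*pt, round_keys)
    faulty = encrypt(*pt, round_keys, fault_round=29, fault_bit=3)

    diff = (correct[0] ^ faulty[0], correct[1] ^ faulty[1])
    print("ciphertext differential:", [format(d, "04x") for d in diff])
```

Fault attacks of this kind work backwards from such ciphertext differentials to constrain round-key bits; the analysis in the paper does so from faults injected into a single round.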
Cybercrime forums enable modern criminal entrepreneurs to collaborate with other criminals in increasingly efficient and sophisticated criminal endeavors. Understanding the connections between different products and services can often illuminate effective interventions. However, generating this understanding of supply chains currently requires time-consuming manual effort. In this paper, we propose a language-agnostic method to automatically extract supply chains from cybercrime forum posts and replies. Our supply chain detection algorithm can identify 36% and 58% of relevant chains within major English and Russian forums, respectively, showing improvements over the baselines of 13% and 36%, respectively. Our analysis of the automatically generated supply chains demonstrates the underlying connections between products and services within these forums. For example, the extracted supply chain illuminated the connection between hack-for-hire services and the selling of rare and valuable 'OG' accounts, which has only recently been reported. This understanding of the connections between products and services exposes potentially effective intervention points.
An-Ping Li, 2008
We present an estimate of the upper bound on the number of 16-byte plaintexts for English texts, which indicates that block ciphers with a block length of no more than 16 bytes are subject to plaintext-recovery attacks in known-plaintext or chosen-plaintext settings.
Graph embeddings have been proposed to map graph data to a low-dimensional space for downstream processing (e.g., node classification or link prediction). With the increasing collection of personal data, graph embeddings can be trained on private and sensitive data. For the first time, we quantify the privacy leakage in graph embeddings through three inference attacks targeting Graph Neural Networks. We propose a membership inference attack to infer whether the graph node corresponding to an individual user's data was a member of the model's training set or not. We consider a black-box setting, where the adversary exploits the output prediction scores, and a white-box setting, where the adversary also has access to the released node embeddings. This attack achieves an accuracy of up to 28% (black-box) and 36% (white-box) beyond a random guess by exploiting the distinguishable footprint that the graph embedding leaves between train and test data records. We propose a graph reconstruction attack, where the adversary aims to reconstruct the target graph given the corresponding graph embeddings. Here, the adversary can reconstruct the graph with more than 80% accuracy and infer links between two nodes with around 30% higher confidence than a random guess. We then propose an attribute inference attack, where the adversary aims to infer a sensitive attribute. We show that graph embeddings are strongly correlated with node attributes, allowing the adversary to infer sensitive information (e.g., gender or location).
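The black-box membership inference described above exploits the tendency of models to be more confident on records they were trained on than on unseen records. The Python sketch below shows the simplest form of that idea, a confidence-threshold attack run on synthetic prediction scores; it is an illustration of the principle only, not the paper's attack or its evaluation data.

```python
import random

def max_confidence(scores):
    """Highest class probability in a prediction score vector."""
    return max(scores)

def threshold_attack(score_vectors, threshold=0.9):
    """Label a record as 'member' when the model's top confidence exceeds
    the threshold (members tend to be predicted more confidently)."""
    return [max_confidence(s) > threshold for s in score_vectors]

def synthetic_scores(rng, n, confident):
    """Generate toy 3-class score vectors; 'confident' mimics training records."""
    out = []
    for _ in range(n):
        top = rng.uniform(0.85, 1.0) if confident else rng.uniform(0.4, 0.95)
        rest = 1.0 - top
        split = rng.uniform(0.0, rest)
        out.append([top, split, rest - split])
    return out

if __name__ == "__main__":
    rng = random.Random(1)
    members = synthetic_scores(rng, 500, confident=True)       # train-node scores
    non_members = synthetic_scores(rng, 500, confident=False)  # test-node scores

    guesses = threshold_attack(members + non_members)
    truth = [True] * len(members) + [False] * len(non_members)
    acc = sum(g == t for g, t in zip(guesses, truth)) / len(truth)
    print(f"membership inference accuracy: {acc:.2f} (random guess would be 0.50)")
```

A real attack would calibrate the threshold (for example on shadow models) rather than fix it by hand; the point here is only the distinguishable confidence footprint between training and test records that the abstract describes.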
