
POW-HOW: An enduring timing side-channel to evade online malware sandboxes

Publication date: 2021
Language: English





Online malware scanners are one of the best weapons in the arsenal of cybersecurity companies and researchers. A fundamental part of such systems is the sandbox, which provides an instrumented and isolated environment (virtualized or emulated) where any user can upload and run unknown artifacts to identify potentially malicious behaviors. The provided APIs and the wealth of information in the reports produced by these services have also helped attackers test the efficacy of numerous techniques for making malware hard to detect. The most common technique malware uses to evade the analysis system is to monitor the execution environment, detect the presence of any debugging artifacts, and hide its malicious behavior if needed. This is usually achieved by looking for signals suggesting that the execution environment is not a native machine, such as specific memory patterns or the behavioral traits of certain CPU instructions. In this paper, we show how an attacker can evade detection on such online services by incorporating a Proof-of-Work (PoW) algorithm into a malware sample. Specifically, we leverage the asymptotic behavior of the computational cost of PoW algorithms when they run on certain classes of hardware platforms to effectively detect the non-bare-metal environment of the malware sandbox analyzer. To prove the validity of this intuition, we design and implement the POW-HOW framework, a tool to automatically implement sandbox-detection strategies and embed a test evasion program into an arbitrary malware sample. Our empirical evaluation shows that the proposed evasion technique is durable, hard to fingerprint, and reduces the detection rate of existing malware by a factor of 10. Moreover, we show how bare-metal environments cannot scale with actual malware submission rates for consumer services.
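
To make the intuition concrete, the sketch below times a toy hash-based proof-of-work and flags the environment as a likely sandbox when the computation runs far slower than on bare metal. It is only a minimal illustration of the idea, not the POW-HOW framework itself: the SHA-256 puzzle stands in for the PoW algorithms evaluated in the paper, and the difficulty and the BARE_METAL_BUDGET_S threshold are hypothetical values that would need offline calibration on reference hardware.

import hashlib
import time

# Illustrative constants (assumptions, not values from the paper): the
# difficulty and the bare-metal time budget must be calibrated offline.
DIFFICULTY_BITS = 18            # leading zero bits required of the hash
BARE_METAL_BUDGET_S = 2.0       # generous upper bound measured on real CPUs

def proof_of_work(seed: bytes, difficulty_bits: int) -> int:
    """Find a nonce so that SHA-256(seed || nonce) has `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(seed + nonce.to_bytes(8, "little")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def looks_like_sandbox() -> bool:
    """Guess 'sandbox' when the PoW runs much slower than the bare-metal budget.
    A single run is noisy; a real probe would repeat and aggregate several runs."""
    start = time.perf_counter()
    proof_of_work(b"pow-how-probe", DIFFICULTY_BITS)
    elapsed = time.perf_counter() - start
    return elapsed > BARE_METAL_BUDGET_S

if __name__ == "__main__":
    if looks_like_sandbox():
        print("Emulated/virtualized environment suspected: stay dormant.")
    else:
        print("Bare metal likely: proceed.")

Because the check relies only on how long a well-defined computation takes, it leaves none of the classic environment-probing artifacts (registry lookups, CPUID tricks) that sandboxes commonly fingerprint, which is the property the paper calls durability.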



Related research


Paul Maxwell, David Niblick (2020)
Cybersecurity continues to be a difficult issue for society, especially as the number of networked systems grows. Techniques to protect these systems range from rules-based to artificial-intelligence-based intrusion detection systems and anti-virus tools. These systems rely upon the information contained in network packets and downloaded executables to function. Side-channel information leaked from hardware has been shown to reveal secret information, such as encryption keys. This work demonstrates that side-channel information can be used to detect malware running on a computing platform without access to the code involved.
We propose a method, based on program analysis and transformation, for eliminating timing side channels in software code that implements security-critical applications. Our method takes as input the original program together with a list of secret variables (e.g., cryptographic keys, security tokens, or passwords) and returns the transformed program as output. The transformed program is guaranteed to be functionally equivalent to the original program and free of both instruction- and cache-timing side channels. Specifically, we ensure that the number of CPU cycles taken to execute any path is independent of the secret data, and the cache behavior of memory accesses, in terms of hits and misses, is independent of the secret data. We have implemented our method in LLVM and validated its effectiveness on a large set of applications, which are cryptographic libraries with 19,708 lines of C/C++ code in total. Our experiments show the method is both scalable for real applications and effective in eliminating timing side channels.
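
As a rough picture of the kind of rewrite such a transformation performs, the toy example below replaces a secret-dependent branch with a branchless, arithmetic select, so the same instructions execute for either value of the secret bit. This is only a conceptual sketch in Python, which gives no real timing guarantees; the method described above operates on compiled code at the LLVM level and also normalizes cache behavior, which the toy does not address.

def select_leaky(secret_bit: int, a: int, b: int) -> int:
    # Secret-dependent branch: the path taken (and its timing) reveals secret_bit.
    if secret_bit:
        return a
    return b

def select_constant_time(secret_bit: int, a: int, b: int) -> int:
    # Branchless rewrite: both operands are always computed and combined with a
    # mask, so the executed instruction sequence does not depend on secret_bit.
    mask = -(secret_bit & 1)          # all-ones if the bit is 1, zero otherwise
    return (a & mask) | (b & ~mask)

assert select_constant_time(1, 7, 9) == 7
assert select_constant_time(0, 7, 9) == 9
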
GPUs are increasingly being used in security applications, especially for accelerating encryption/decryption. While GPUs are an attractive platform in terms of performance, the security of these devices raises a number of concerns. One vulnerability is data-dependent timing information, which can be exploited by an adversary to recover the encryption key. Memory-system features are frequently exploited since they create detectable timing variations. In this paper, our attack model is a coalescing attack, which leverages a critical GPU microarchitectural feature: the coalescing unit. As multiple concurrent GPU memory requests can refer to the same cache block, the coalescing unit collapses them into a single memory transaction. The access time of an encryption kernel is dependent on the number of transactions. Correlation between a guessed key value and the associated timing samples can be exploited to recover the secret key. In this paper, a series of hardware/software countermeasures is proposed to obfuscate the memory timing side channel, making the GPU more resilient without impacting performance. Our hardware-based approach attempts to randomize the width of the coalescing unit to lower the signal-to-noise ratio. We present a hierarchical Miss Status Holding Register (MSHR) design that can merge transactions across different warps. This feature boosts performance while, at the same time, securing the execution. We also present a software-based approach to permute the organization of critical data structures, significantly changing the coalescing behavior and introducing a high degree of randomness. Equipped with our new protections, the effort to launch a successful attack is increased by up to 1433X and 178X, while also improving encryption/decryption performance by up to 7%.
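
The leakage model behind the coalescing attack can be illustrated with a toy simulation: for a warp of table lookups indexed by plaintext XOR key, the number of distinct cache lines touched sets the number of coalesced transactions, and correlating the transaction counts predicted under each key guess with measured timings singles out (part of) the key. Everything below, including the table geometry, the noise level, and helper names such as predicted_transactions, is an illustrative assumption rather than the setup used in the paper.

import numpy as np

rng = np.random.default_rng(0)

WARP_SIZE = 32          # threads whose requests the coalescing unit merges
LINE_ENTRIES = 16       # table entries per cache line (assumed geometry)
TRUE_KEY = 0x3C

def predicted_transactions(plaintexts: np.ndarray, key_guess: int) -> np.ndarray:
    """Distinct cache lines (i.e., memory transactions) per warp under a key guess."""
    lines = (plaintexts ^ key_guess) // LINE_ENTRIES
    return np.array([len(set(row)) for row in lines])

# Simulated timings: proportional to the true transaction count plus noise.
plaintexts = rng.integers(0, 256, size=(2000, WARP_SIZE), dtype=np.uint8)
timings = predicted_transactions(plaintexts, TRUE_KEY) + rng.normal(0, 2.0, size=2000)

# Correlation attack: with 16-entry lines, guesses sharing the correct upper
# nibble predict identical counts, so this toy only pins down the key byte's
# upper bits; finer granularity or more rounds would be needed for the rest.
scores = [np.corrcoef(predicted_transactions(plaintexts, g), timings)[0, 1]
          for g in range(256)]
best = int(np.argmax(scores))
print("best guess:", hex(best),
      "- upper nibble matches true key:", (best >> 4) == (TRUE_KEY >> 4))

Randomizing the coalescing width or permuting the data layout, as the countermeasures above do, breaks the fixed mapping between key guesses and transaction counts, which is exactly what destroys this correlation.
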
Deep learning is gaining importance in many applications. However, Neural Networks face several security and privacy threats. This is particularly significant in the scenario where Cloud infrastructures deploy a service with a Neural Network model at the back end. Here, an adversary can extract the Neural Network parameters, infer the regularization hyperparameter, identify whether a data point was part of the training data, and generate effective transferable adversarial examples to evade classifiers. This paper shows how a Neural Network model is susceptible to a timing side-channel attack. A black-box Neural Network extraction attack is proposed that exploits the timing side channel to infer the depth of the network. Although constructing an equivalent architecture is a complex search problem, it is shown how Reinforcement Learning with knowledge distillation can effectively reduce the search space to infer a target model. The proposed approach has been tested with VGG architectures on the CIFAR10 data set. It is observed that it is possible to reconstruct substitute models with test accuracy close to that of the target models, and that the proposed approach is scalable and independent of the type of Neural Network architecture.
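
The depth-inference part of the timing channel can be pictured with a toy model: forward-pass latency grows roughly linearly with the number of layers, so an attacker who has profiled latencies for candidate depths can map the measured latency of an unknown model to a depth estimate. The NumPy sketch below uses small fully connected networks as a stand-in for the VGG/GPU setting of the paper; the widths, batch sizes, and candidate depths are all illustrative.

import time
import numpy as np

rng = np.random.default_rng(1)
WIDTH, BATCH = 512, 64   # illustrative layer width and query batch size

def make_mlp(depth: int):
    """Random fully connected network with `depth` hidden layers."""
    return [rng.standard_normal((WIDTH, WIDTH)).astype(np.float32) for _ in range(depth)]

def forward(layers, x):
    for w in layers:
        x = np.maximum(x @ w, 0.0)   # linear layer followed by ReLU
    return x

def median_latency(layers, trials: int = 20) -> float:
    x = rng.standard_normal((BATCH, WIDTH)).astype(np.float32)
    samples = []
    for _ in range(trials):
        t0 = time.perf_counter()
        forward(layers, x)
        samples.append(time.perf_counter() - t0)
    return float(np.median(samples))

# Offline profiling: latency per candidate depth (the attacker's timing model).
profile = {d: median_latency(make_mlp(d)) for d in range(2, 12, 2)}

# "Black-box" query: time an unknown model and pick the closest profiled depth.
unknown = make_mlp(8)
t = median_latency(unknown)
estimate = min(profile, key=lambda d: abs(profile[d] - t))
print(f"measured {t*1e3:.2f} ms -> estimated depth {estimate} (true depth 8)")
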
In recent years, convolutional neural networks (CNNs) have received a lot of interest in the side-channel community. Previous work has shown that CNNs have the potential to break cryptographic algorithms protected with masking or desynchronization. Several CNN models have already been explored, reaching the same or an even better level of performance compared to traditional side-channel attacks (SCA). In this paper, we investigate the Residual Network architecture and build a new CNN model called the attention network. To enhance the power of the attention network, we introduce an attention mechanism, the Convolutional Block Attention Module (CBAM), and incorporate it into the CNN architecture. CBAM points out the informative points of the input traces and makes the attention network focus on the relevant leakages in the measurements. This improves the performance of the CNNs, since irrelevant points introduce extra noise and degrade attack performance. We compare our attention network with the network designed for the masked AES implementation, called the ASCAD network, and show that the attention network performs better. Finally, a new visualization method, named Class Gradient Visualization (CGV), is proposed to recognize which points of the input traces have a positive influence on the predicted result of the neural networks; it also explains why the attention network is superior to the ASCAD network. We validate the attention network through extensive experiments on four public datasets and demonstrate that it is efficient against different AES implementations.
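
For readers unfamiliar with CBAM, the snippet below sketches one possible 1-D variant of the module for side-channel traces, assuming a PyTorch implementation: channel attention computed from average- and max-pooled descriptors, followed by a temporal attention map. The surrounding attention-network architecture, hyperparameters, and training procedure from the paper are not reproduced here.

import torch
import torch.nn as nn

class CBAM1d(nn.Module):
    """Convolutional Block Attention Module adapted to 1-D traces:
    channel attention followed by temporal (spatial) attention."""

    def __init__(self, channels: int, reduction: int = 8, kernel_size: int = 7):
        super().__init__()
        # Shared MLP applied to average- and max-pooled channel descriptors.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        # Convolution over stacked channel-wise average and max maps.
        self.spatial = nn.Conv1d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                       # x: (batch, channels, length)
        b, c, _ = x.shape
        chan = torch.sigmoid(self.mlp(x.mean(dim=2)) + self.mlp(x.amax(dim=2)))
        x = x * chan.view(b, c, 1)              # re-weight informative channels
        maps = torch.cat([x.mean(dim=1, keepdim=True),
                          x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(maps))   # highlight leaking points

# Example: insert the module after a convolutional block of an SCA network.
features = torch.randn(16, 32, 700)    # batch of 700-sample traces, 32 channels
attended = CBAM1d(32)(features)
print(attended.shape)                  # torch.Size([16, 32, 700])
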