
EOP: An Encryption-Obfuscation Solution for Protecting PCBs Against Tampering and Reverse Engineering

Published by: Zimu Guo
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





PCBs are core components of devices ranging from consumer electronics to military applications. Because PCBs are physically accessible, they are vulnerable to attacks such as probing, eavesdropping, and reverse engineering. In this paper, a solution named EOP is proposed to mitigate these threats. EOP encrypts inter-chip communication with a stream cipher. The encryption and decryption are driven by dedicated clock modules, which guarantee that the stream cipher remains correctly synchronized and free from tampering. Additionally, EOP incorporates PCB-level obfuscation to protect against reverse engineering. EOP is designed to be built entirely from commercial off-the-shelf (COTS) components. For validation, EOP is implemented in a Zynq SoC based system, and both normal operation and tamper-detection performance are verified. The results show that EOP delivers data from one chip to another without errors and is sensitive to any active tampering attack.
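The abstract describes inter-chip traffic encrypted with a stream cipher whose encryption and decryption are kept in lock-step by dedicated clock modules. The Python sketch below is only a conceptual illustration of that synchronization requirement, not the paper's cipher or clock hardware: the `ChipLink` class, the hash-based toy keystream, and the per-frame `tick` counter are hypothetical stand-ins. While the two ends stay synchronized the data arrives intact; once an active tamperer desynchronizes them, decryption yields garbage.

```python
# Minimal sketch of keystream-synchronized inter-chip encryption (illustrative only;
# the paper's actual stream cipher and clock modules are not reproduced here).
import hashlib


def keystream(key: bytes, clock: int, length: int) -> bytes:
    """Toy keystream: hash of key || clock value, truncated to `length` bytes.
    Stands in for the stream cipher driven by the dedicated clock module."""
    out, block = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + clock.to_bytes(8, "big") + block.to_bytes(4, "big")).digest()
        block += 1
    return out[:length]


def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))


class ChipLink:
    """One end of the inter-chip link; `tick` models the shared clock module."""

    def __init__(self, key: bytes):
        self.key, self.tick = key, 0

    def send(self, plaintext: bytes) -> bytes:
        ct = xor(plaintext, keystream(self.key, self.tick, len(plaintext)))
        self.tick += 1                      # advance local clock after every frame
        return ct

    def receive(self, ciphertext: bytes) -> bytes:
        pt = xor(ciphertext, keystream(self.key, self.tick, len(ciphertext)))
        self.tick += 1
        return pt


key = b"\x01" * 32                          # shared between the two chips at provisioning time
tx, rx = ChipLink(key), ChipLink(key)

frame = b"sensor reading 42"
assert rx.receive(tx.send(frame)) == frame  # synchronized clocks: data delivered intact

rx.tick += 1                                # tampering desynchronizes the clocks
assert rx.receive(tx.send(frame)) != frame  # decryption now yields garbage
```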




Read also

In contrast to software reverse engineering, there are hardly any tools available that support hardware reversing. Therefore, the reversing process is conducted by human analysts combining several complex semi-automated steps. However, countermeasures against reversing are evaluated solely against mathematical models. Our research goal is the establishment of cognitive obfuscation based on the exploration of underlying psychological processes. We aim to identify problems which are hard to solve for human analysts and derive novel quantification metrics, thus enabling stronger obfuscation techniques.
Aryan Verma (2021)
With the increasing use of machine learning in Android applications, more research and effort is being put into developing better-performing machine learning algorithms trained on vast amounts of data. Alongside on-device machine learning comes the threat of trained models being extracted from application packages (APKs) through reverse engineering. Current ways to protect models in mobile applications, such as name obfuscation, cloud deployment, and last-layer isolation, offer limited security and require considerable implementation effort. This paper gives an algorithm to protect trained machine learning models inside Android applications with high security and low implementation effort. The algorithm ensures security by encrypting the model and decrypting it at run time with the 256-bit Advanced Encryption Standard (AES) inside the running application. It works efficiently with large model files without blocking the user interface (UI) thread. Compared to other methods, it is fast, more secure, and requires less effort. The algorithm gives developers and researchers a way to secure their models while making their results available to all without concern.
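The abstract above outlines encrypting a model file and decrypting it at run time with 256-bit AES. The sketch below is a hedged, minimal illustration of that idea in Python using AES-256-GCM from the third-party cryptography package; it is not the paper's Android implementation, where the work would typically be done with the platform's crypto APIs on a background thread so the UI thread is never blocked, and where key handling is the hard part.

```python
# Hedged sketch: AES-256-GCM protection of a serialized model blob.
# Uses the third-party `cryptography` package; not the paper's Android implementation.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes


def encrypt_model(model_bytes: bytes, key: bytes) -> tuple[bytes, bytes, bytes]:
    """Encrypt the model at packaging time; returns (nonce, ciphertext, tag)."""
    nonce = os.urandom(12)                              # 96-bit nonce recommended for GCM
    enc = Cipher(algorithms.AES(key), modes.GCM(nonce)).encryptor()
    ciphertext = enc.update(model_bytes) + enc.finalize()
    return nonce, ciphertext, enc.tag


def decrypt_model(nonce: bytes, ciphertext: bytes, tag: bytes, key: bytes) -> bytes:
    """Decrypt at run time, just before the model is loaded into the ML runtime."""
    dec = Cipher(algorithms.AES(key), modes.GCM(nonce, tag)).decryptor()
    return dec.update(ciphertext) + dec.finalize()      # raises if the blob was tampered with


key = os.urandom(32)                                    # 256-bit key; in practice kept out of the APK
model = b"\x00" * 1024                                  # placeholder for a serialized model file
nonce, ct, tag = encrypt_model(model, key)
assert decrypt_model(nonce, ct, tag, key) == model
```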
Understanding the internals of Integrated Circuits (ICs), referred to as Hardware Reverse Engineering (HRE), is of interest to both legitimate and malicious parties. HRE is a complex process in which semi-automated steps are interwoven with human sense-making processes. Currently, little is known about the technical and cognitive processes which determine the success of HRE. This paper performs an initial investigation on how reverse engineers solve problems, how manual and automated analysis methods interact, and which cognitive factors play a role. We present the results of an exploratory behavioral study with eight participants that was conducted after they had completed a 14-week training. We explored the validity of our findings by comparing them with the behavior (strategies applied and solution time) of an HRE expert. The participants were observed while solving a realistic HRE task. We tested cognitive abilities of our participants and collected large sets of behavioral data from log files. By comparing the least and most efficient reverse engineers, we were able to observe successful strategies. Moreover, our analyses suggest a phase model for reverse engineering, consisting of three phases. Our descriptive results further indicate that the cognitive factor Working Memory (WM) might play a role in efficiently solving HRE problems. Our exploratory study builds the foundation for future research in this topic and outlines ideas for designing cognitively difficult countermeasures (cognitive obfuscation) against HRE.
Machine learning (ML) applications are increasingly prevalent. Protecting the confidentiality of ML models becomes paramount for two reasons: (a) a model can be a business advantage to its owner, and (b) an adversary may use a stolen model to find transferable adversarial examples that can evade classification by the original model. Access to the model can be restricted to be only via well-defined prediction APIs. Nevertheless, prediction APIs still provide enough information to allow an adversary to mount model extraction attacks by sending repeated queries via the prediction API. In this paper, we describe new model extraction attacks using novel approaches for generating synthetic queries, and optimizing training hyperparameters. Our attacks outperform state-of-the-art model extraction in terms of transferability of both targeted and non-targeted adversarial examples (up to +29-44 percentage points, pp), and prediction accuracy (up to +46 pp) on two datasets. We provide take-aways on how to perform effective model extraction attacks. We then propose PRADA, the first step towards generic and effective detection of DNN model extraction attacks. It analyzes the distribution of consecutive API queries and raises an alarm when this distribution deviates from benign behavior. We show that PRADA can detect all prior model extraction attacks with no false positives.
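PRADA, as summarized above, watches the distribution of distances between successive API queries and raises an alarm when it deviates from benign behavior. The sketch below is a heavily simplified illustration of that idea, not PRADA's actual algorithm or thresholds: it collects minimum inter-query distances and applies a Shapiro-Wilk normality test, flagging query streams whose distance distribution stops looking natural. The QueryMonitor class, the 0.9 threshold, and the synthetic data are all illustrative assumptions.

```python
# Much-simplified illustration of distribution-based extraction detection:
# track minimum distances between successive queries and flag the client when
# their distribution stops looking benign. Not PRADA's exact algorithm.
import numpy as np
from scipy.stats import shapiro


class QueryMonitor:
    def __init__(self, threshold: float = 0.9, min_samples: int = 20):
        self.threshold, self.min_samples = threshold, min_samples
        self.queries, self.distances = [], []

    def observe(self, query: np.ndarray) -> bool:
        """Record one prediction-API query; return True if an alarm is raised."""
        if self.queries:
            dmin = min(np.linalg.norm(query - q) for q in self.queries)
            self.distances.append(dmin)
        self.queries.append(query)
        if len(self.distances) < self.min_samples:
            return False
        stat, _ = shapiro(self.distances)       # W close to 1 => roughly normal
        return stat < self.threshold            # deviation from benign behaviour


rng = np.random.default_rng(0)
monitor = QueryMonitor()
benign = [rng.normal(size=16) for _ in range(50)]                        # natural, spread-out inputs
crafted = [benign[0] + 1e-3 * rng.normal(size=16) for _ in range(50)]    # near-duplicate probe queries

alarms = [monitor.observe(q) for q in benign + crafted]
print("alarm raised:", any(alarms))
```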
Recently, a number of existing blockchain systems have witnessed major bugs and vulnerabilities within smart contracts. Although the literature features a number of proposals for securing smart contracts, these proposals mostly focus on proving the correctness or absence of a certain type of vulnerability within a contract, but cannot protect deployed (legacy) contracts from being exploited. In this paper, we address this problem in the context of re-entrancy exploits and propose a novel smart contract security technology, dubbed Sereum (Secure Ethereum), which protects existing, deployed contracts against re-entrancy attacks in a backwards compatible way based on run-time monitoring and validation. Sereum requires neither modification of, nor semantic knowledge about, existing contracts. By means of implementation and evaluation using the Ethereum blockchain, we show that Sereum covers the actual execution flow of a smart contract to accurately detect and prevent attacks with a false positive rate as small as 0.06% and with negligible run-time overhead. As a by-product, we develop three advanced re-entrancy attacks to demonstrate the limitations of existing offline vulnerability analysis tools.
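Sereum, as described above, detects re-entrancy at run time by monitoring contract execution. The sketch below is an extremely simplified, non-EVM illustration of that general idea: while a contract's invocation is suspended on an external call, a write from a re-entrant call to a storage slot that the suspended invocation already depended on is treated as an attack. The ReentrancyMonitor class and the slot names are hypothetical; Sereum's actual taint tracking inside an instrumented Ethereum client is substantially more involved.

```python
# Extremely simplified, non-EVM sketch of run-time re-entrancy monitoring:
# while an external call made by a contract is still pending, writes to that
# contract's already-used storage slots from re-entrant calls are flagged.
class ReentrancyMonitor:
    def __init__(self):
        self.call_stack = []          # contracts with an invocation in progress
        self.locked = {}              # contract -> set of storage slots in use

    def enter(self, contract: str):
        self.call_stack.append(contract)
        self.locked.setdefault(contract, set())

    def storage_access(self, contract: str, slot: str, write: bool):
        # A write during a re-entrant invocation to a slot the suspended outer
        # invocation already used would make the outer invocation inconsistent.
        if write and self.call_stack.count(contract) > 1 and slot in self.locked[contract]:
            raise RuntimeError(f"re-entrancy attack: write to locked slot {contract}.{slot}")
        self.locked[contract].add(slot)

    def leave(self, contract: str):
        self.call_stack.pop()
        if contract not in self.call_stack:
            self.locked[contract].clear()


# Toy replay of the classic withdraw() re-entrancy pattern.
mon = ReentrancyMonitor()
mon.enter("Bank")
mon.storage_access("Bank", "balances[attacker]", write=False)   # outer invocation checks the balance
mon.enter("Attacker")                                            # external call hits attacker fallback
mon.enter("Bank")                                                # attacker re-enters withdraw()
try:
    # Balance update inside the re-entrant call: inconsistent with the suspended outer check.
    mon.storage_access("Bank", "balances[attacker]", write=True)
except RuntimeError as e:
    print("blocked:", e)
```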