
Thread-Based Obfuscation through Control-Flow Mangling

Published by Rasha Salah Omar
Publication date: 2013
Research field: Informatics Engineering
Paper language: English





The increasing use of cloud computing and remote execution has made program security especially important. Code obfuscation has been proposed to make programs harder for attackers to understand. In this paper, we exploit multi-core processing to substantially increase the complexity of programs, making reverse engineering more difficult. We propose a novel method that automatically partitions any serial thread into an arbitrary number of parallel threads at the basic-block level. The method generates new control-flow graphs that preserve the blocks' serial successor relations and use guards to guarantee that only one basic block is active at a time. For m threads and n basic blocks, the method yields m^n different block-to-thread combinations, significantly complicating the execution state. We provide a correctness proof for the algorithm and implement it in the LLVM compilation framework.
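The paper's LLVM pass is not reproduced here; the following is only a minimal Python sketch of the idea described above, under the assumption that the serial program has already been split into block functions. The block functions bb0/bb1/bb2, the ASSIGNMENT map, and the worker loop are illustrative, not the authors' implementation: blocks are distributed over threads, and a shared guard keeps exactly one block active at a time while the serial successor order is preserved.

```python
import threading

# Hypothetical "basic blocks" of an originally serial function.  Each block
# mutates shared state and returns the index of its serial successor
# (or None when the program is finished).
def bb0(state): state["x"] = 1; return 1
def bb1(state): state["x"] += 2; return 2
def bb2(state): print("result:", state["x"]); return None

BLOCKS = [bb0, bb1, bb2]

# One of the m^n possible block-to-thread assignments
# (here m = 2 threads, n = 3 blocks): block index -> thread id.
ASSIGNMENT = {0: 0, 1: 1, 2: 0}

guard = threading.Condition()
active = 0          # index of the single currently-active basic block
finished = False

def worker(tid, state):
    global active, finished
    while True:
        with guard:
            # Guard: sleep until the active block belongs to this thread
            # (or the whole program has terminated).
            guard.wait_for(lambda: finished or ASSIGNMENT[active] == tid)
            if finished:
                return
            successor = BLOCKS[active](state)   # run the block serially
            if successor is None:
                finished = True
            else:
                active = successor              # hand control to the successor
            guard.notify_all()

state = {}
threads = [threading.Thread(target=worker, args=(t, state)) for t in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Choosing a different ASSIGNMENT map gives another of the m^n block-to-thread combinations mentioned in the abstract; a real obfuscator would additionally disguise the guard logic itself.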




Read also

As machine learning becomes a practice and commodity, numerous cloud-based services and frameworks are provided to help customers develop and deploy machine learning applications. While it is prevalent to outsource model training and serving tasks to the cloud, it is important to protect the privacy of sensitive samples in the training dataset and prevent information leakage to untrusted third parties. Past work has shown that a malicious machine learning service provider or end user can easily extract critical information about the training samples from the model parameters or even just the model outputs. In this paper, we propose a novel and generic methodology to preserve the privacy of training data in machine learning applications. Specifically, we introduce an obfuscate function and apply it to the training data before feeding them to the model training task. This function adds random noise to existing samples or augments the dataset with new samples. By doing so, sensitive information about the properties of individual samples, or statistical properties of a group of samples, is hidden. Meanwhile, the model trained from the obfuscated dataset can still achieve high accuracy. With this approach, customers can safely disclose the data or models to third-party providers or end users without needing to worry about data privacy. Our experiments show that this approach can effectively defeat four existing types of machine learning privacy attacks at negligible accuracy cost.
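The abstract does not specify the obfuscation function, so the snippet below is only a hedged sketch of the two mechanisms it names: adding random noise to existing samples and augmenting the dataset with new ones. The `obfuscate` helper, the noise scale, and the augmentation ratio are hypothetical choices, not the paper's parameters.

```python
import numpy as np

def obfuscate(samples, noise_scale=0.05, augment_ratio=0.2, seed=None):
    """Hypothetical data-obfuscation step: perturb every sample with Gaussian
    noise and append synthetic samples jittered from existing ones, hiding
    per-sample properties while keeping the data usable for training."""
    rng = np.random.default_rng(seed)
    noisy = samples + rng.normal(0.0, noise_scale, size=samples.shape)

    # Augment: pick random rows and jitter them again to create new samples.
    n_new = int(len(samples) * augment_ratio)
    picks = rng.integers(0, len(noisy), size=n_new)
    synthetic = noisy[picks] + rng.normal(0.0, noise_scale,
                                          size=(n_new, samples.shape[1]))
    return np.concatenate([noisy, synthetic], axis=0)

# The obfuscated data -- not the raw samples -- is what would be handed to
# an untrusted training service.
X = np.random.rand(100, 8)          # stand-in for a sensitive dataset
X_obf = obfuscate(X, noise_scale=0.05, augment_ratio=0.2, seed=0)
```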
We add probabilistic features to basic thread algebra and its extensions with thread-service interaction and strategic interleaving. Here, threads represent the behaviours produced by instruction sequences under execution and services represent the behaviours exhibited by the components of execution environments of instruction sequences. In a paper concerned with probabilistic instruction sequences, we proposed several kinds of probabilistic instructions and gave an informal explanation for each of them. The probabilistic features added to the extension of basic thread algebra with thread-service interaction make it possible to give a formal explanation in terms of non-probabilistic instructions and probabilistic services. The probabilistic features added to the extensions of basic thread algebra with strategic interleaving make it possible to cover strategies corresponding to probabilistic scheduling algorithms.
Pei Wang, Shuai Wang, Jiang Ming (2016)
Program obfuscation is an important software protection technique that prevents attackers from revealing the programming logic and design of the software. We introduce translingual obfuscation, a new software obfuscation scheme which makes programs obscure by misusing the unique features of certain programming languages. Translingual obfuscation translates part of a program from its original language to another language which has a different programming paradigm and execution model, thus increasing program complexity and impeding reverse engineering. In this paper, we investigate the feasibility and effectiveness of translingual obfuscation with Prolog, a logic programming language. We implement translingual obfuscation in a tool called BABEL, which can selectively translate C functions into Prolog predicates. By leveraging two important features of the Prolog language, i.e., unification and backtracking, BABEL obfuscates both the data layout and control flow of C programs, making them much more difficult to reverse engineer. Our experiments show that BABEL provides effective and stealthy software obfuscation, while the cost is only modest compared to one of the most popular commercial obfuscators on the market. With BABEL, we verified the feasibility of translingual obfuscation, which we consider to be a promising new direction for software obfuscation.
Due to its open-source nature, the Android operating system has been the main target of attackers. Malware creators routinely apply different code obfuscations to their apps to hide malicious activities. Features extracted from these obfuscated samples through program analysis contain many useless and disguised features, which leads to many false negatives. To address this issue, in this paper we demonstrate that obfuscation-resilient malware analysis can be achieved through contrastive learning, taking Android malware classification as an example. The key insight behind our analysis is that contrastive learning can be used to reduce the difference introduced by obfuscation while amplifying the difference between malware and benign apps (or other types of malware). Based on this analysis, we design a system, IFDroid, that achieves robust and interpretable classification of Android malware. To achieve robust classification, we perform contrastive learning on malware samples to learn an encoder that automatically extracts robust features. To achieve interpretable classification, we transform the function call graph of a sample into an image by centrality analysis and obtain the corresponding heatmaps with visualization techniques; these heatmaps help users understand why a sample is classified into a particular family. We implement IFDroid and perform extensive evaluations on two widely used datasets. Experimental results show that IFDroid is superior to state-of-the-art Android malware familial classification systems. Moreover, IFDroid maintains a 98.2% true positive rate when classifying 8,112 obfuscated malware samples.
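The abstract only states the insight that contrastive learning suppresses obfuscation-induced differences; as a rough illustration of that reading, an InfoNCE-style loss over embeddings of original and obfuscated variants of the same samples could look like the sketch below. The `contrastive_loss` function, the embedding shapes, and the temperature are assumptions for illustration, not IFDroid's actual training objective.

```python
import numpy as np

def contrastive_loss(z_orig, z_obf, temperature=0.1):
    """InfoNCE-style loss over embeddings of original samples (z_orig) and
    their obfuscated counterparts (z_obf), both of shape (N, d).  Matching
    rows are pulled together and mismatching rows pushed apart, so the
    encoder learns features that are insensitive to obfuscation."""
    # L2-normalise embeddings so the dot product is cosine similarity.
    a = z_orig / np.linalg.norm(z_orig, axis=1, keepdims=True)
    b = z_obf / np.linalg.norm(z_obf, axis=1, keepdims=True)
    sim = a @ b.T / temperature                     # (N, N) similarity matrix
    # Row i should "choose" column i: softmax cross-entropy over each row.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

# Toy usage with random "encoder outputs" standing in for real embeddings.
rng = np.random.default_rng(0)
z1 = rng.normal(size=(16, 32))
z2 = z1 + 0.1 * rng.normal(size=(16, 32))   # obfuscated variants stay close
print(contrastive_loss(z1, z2))
```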
User privacy can be compromised by matching user data traces to records of their previous behavior. The matching of the statistical characteristics of traces to prior user behavior has been widely studied. However, an adversary can also identify a user deterministically by searching data traces for a pattern that is unique to that user. Our goal is to thwart such an adversary by applying small artificial distortions to data traces such that each potentially identifying pattern is shared by a large number of users. Importantly, in contrast to statistical approaches, we develop data-independent algorithms that require no assumptions on the model by which the traces are generated. By relating the problem to a set of combinatorial questions on sequence construction, we are able to provide provable guarantees for our proposed constructions. We also introduce data-dependent approaches for the same problem. The algorithms are evaluated on synthetic data traces and on the Reality Mining Dataset to demonstrate their utility.