
Advanced Software Protection Now

Posted by Carlos Sarraute
Publication date: 2010
Research field: Informatics Engineering
Paper language: English
Authored by Diego Bendersky





Software digital rights management is a pressing need for the software development industry that remains unmet, as no practical solution has been acclaimed successful by the industry. We introduce a novel software-protection method, fully implemented with today's technologies, that provides traitor tracing and license enforcement and requires neither additional hardware nor inter-connectivity. Our work benefits from the use of secure triggers, a cryptographic primitive that is secure assuming the existence of an IND-CPA secure block cipher. Using our framework, developers may insert license checks and fingerprints, and obfuscate the code using secure triggers. As a result, the cost for software analysis tools to detect and modify the protection mechanisms rises, and with it the complexity of cracking the system.
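The "secure triggers" primitive mentioned above keeps a code fragment encrypted until one specific input arrives, so static analysis never sees the plaintext. The following minimal Python sketch illustrates that idea only; the SHA-256-based keystream and tag check are illustrative stand-ins, whereas the paper assumes an IND-CPA secure block cipher and the framework operates on native code rather than Python strings.

```python
# Toy illustration of a secure trigger: a fragment is stored encrypted, and the
# decryption key is derived from the expected trigger input. Only that exact
# input releases (and could then execute) the fragment.
import hashlib

def _keystream(key: bytes, length: int) -> bytes:
    # Simple hash-based keystream; a stand-in for a proper block cipher mode.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def protect(fragment: bytes, trigger_input: bytes) -> tuple[bytes, bytes]:
    """Encrypt a fragment under a key derived from the trigger input."""
    key = hashlib.sha256(trigger_input).digest()
    tag = hashlib.sha256(key).digest()           # lets the loader recognize the trigger
    cipher = bytes(a ^ b for a, b in zip(fragment, _keystream(key, len(fragment))))
    return tag, cipher

def try_release(candidate: bytes, tag: bytes, cipher: bytes) -> bytes | None:
    """Decrypt only if the candidate input is the trigger; otherwise reveal nothing."""
    key = hashlib.sha256(candidate).digest()
    if hashlib.sha256(key).digest() != tag:
        return None
    return bytes(a ^ b for a, b in zip(cipher, _keystream(key, len(cipher))))

tag, cipher = protect(b"print('licensed feature')", b"expected-license-token")
assert try_release(b"wrong-token", tag, cipher) is None
assert try_release(b"expected-license-token", tag, cipher) == b"print('licensed feature')"
```

Because the protected fragment is only recoverable given the triggering input, a license check hidden this way cannot simply be patched out without first supplying a valid input, which is what raises the cost of analysis described in the abstract.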


Read also

Enforcing data protection and privacy rules within large data processing applications is becoming increasingly important, especially in the light of GDPR and similar regulatory frameworks. Most modern data processing happens on top of a distributed storage layer, and securing this layer against accidental or malicious misuse is crucial to ensuring global privacy guarantees. However, the performance overhead and additional complexity of doing so are often assumed to be significant -- in this work we describe a path forward that tackles both challenges. We propose Software-Defined Data Protection (SDP), an adoption of the Software-Defined Storage approach to non-performance aspects: a trusted controller translates company- and application-specific policies to a set of rules deployed on the storage nodes. These, in turn, apply the rules at line rate but do not take any decisions on their own. Such an approach decouples often-changing policies from request-level enforcement and allows storage nodes to implement the latter more efficiently. Even though in-storage processing brings challenges, mainly because it can jeopardize line-rate processing, we argue that today's Smart Storage solutions can already implement the required functionality, thanks to the separation of concerns introduced by SDP. We highlight the challenges that remain, especially that of trusting the storage nodes. These need to be tackled before we can reach widespread adoption in cloud environments.
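To make the controller/storage-node split described above concrete, here is a minimal Python sketch. The policy layout, the Rule fields, and the node interface are hypothetical illustrations of the SDP idea, not interfaces from the paper; real nodes would apply such rules at line rate in smart-storage hardware rather than in Python.

```python
# Sketch of SDP's separation of concerns: a trusted controller compiles
# high-level policies into flat per-object rules; storage nodes only look the
# rules up and apply them mechanically, taking no decisions of their own.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    object_prefix: str
    allowed_roles: frozenset
    transform: str                    # e.g. "none", "redact-pii", "deny"

class Controller:
    """Trusted component: turns company/application policies into rules."""
    def compile_policy(self, policy: dict) -> list:
        return [Rule(p["prefix"], frozenset(p["roles"]), p.get("transform", "none"))
                for p in policy["paths"]]

class StorageNode:
    """Enforcement-path component: applies installed rules, default-deny."""
    def __init__(self):
        self.rules = []

    def install(self, rules: list) -> None:
        self.rules = rules

    def handle_read(self, key: str, role: str, payload: bytes):
        for r in self.rules:
            if key.startswith(r.object_prefix):
                if role not in r.allowed_roles or r.transform == "deny":
                    return None
                return b"<redacted>" if r.transform == "redact-pii" else payload
        return None                   # no matching rule: deny

policy = {"paths": [{"prefix": "hr/",   "roles": ["hr-app"], "transform": "redact-pii"},
                    {"prefix": "logs/", "roles": ["analytics"]}]}
node = StorageNode()
node.install(Controller().compile_policy(policy))
print(node.handle_read("hr/salaries.csv", "hr-app", b"raw rows"))     # b'<redacted>'
print(node.handle_read("hr/salaries.csv", "analytics", b"raw rows"))  # None
```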
Data exfiltration attacks have led to huge data breaches. Recently, the Equifax attack affected 147M users, and a third-party library - Apache Struts - was alleged to be responsible for it. These attacks often exploit the fact that sensitive data are stored unencrypted in process memory and can be accessed by any function executing within the same process, including untrusted third-party library functions. This paper presents StackVault, a kernel-based system to prevent sensitive stack-based data from being accessed in an unauthorized manner by intra-process functions. Stack-based data includes data on the stack as well as data pointed to by pointer variables on the stack. StackVault consists of three components: (1) a set of programming APIs to allow users to specify which data needs to be protected, (2) a kernel module which uses unforgeable function identities to reliably carry out the sensitive data protection, and (3) an LLVM compiler extension that enables transparent placement of stack protection operations. The StackVault system automatically enforces stack protection through spatial and temporal access monitoring and control over both sensitive stack data and untrusted functions. We implemented StackVault and evaluated it using a number of popular real-world applications, including gRPC. The results show that StackVault is effective and efficient, incurring only up to 2.4% runtime overhead.
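The abstract does not spell out StackVault's API, so the Python sketch below only emulates the concept: sensitive data is registered for protection, and only functions explicitly marked as authorized may read it. The names (protect, access, authorized) and the decorator mechanism are hypothetical; the real system enforces this in a kernel module using unforgeable function identities and LLVM-inserted protection calls.

```python
# Conceptual emulation of intra-process protection of sensitive data: untrusted
# functions in the same process cannot read registered secrets.
import functools

_protected = {}            # name -> secret bytes
_authorized = set()        # names of functions allowed to access protected data

def authorized(fn):
    """Mark a function as allowed to touch protected data (illustrative only)."""
    _authorized.add(fn.__name__)
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper

def protect(name: str, value: bytes) -> None:
    _protected[name] = value

def access(name: str, caller: str) -> bytes:
    # The caller string stands in for an unforgeable function identity.
    if caller not in _authorized:
        raise PermissionError(f"{caller} may not read protected data {name!r}")
    return _protected[name]

@authorized
def sign_request(payload: bytes) -> bytes:
    key = access("api_key", "sign_request")
    return payload + b"|" + key            # placeholder for a real MAC

def third_party_logger(payload: bytes) -> bytes:
    return access("api_key", "third_party_logger")   # raises PermissionError

protect("api_key", b"s3cr3t")
print(sign_request(b"GET /data"))
try:
    third_party_logger(b"GET /data")
except PermissionError as e:
    print("blocked:", e)
```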
Han Qiu, Yi Zeng, Qinkai Zheng (2020)
Deep Neural Networks (DNNs) are well-known to be vulnerable to Adversarial Examples (AEs). A large amount of effort has been spent to launch and heat up the arms race between attackers and defenders. Recently, advanced gradient-based attack techniques were proposed (e.g., BPDA and EOT), which have defeated a considerable number of existing defense methods. Up to today, there are still no satisfactory solutions that can effectively and efficiently defend against those attacks. In this paper, we make a steady step towards mitigating those advanced gradient-based attacks with two major contributions. First, we perform an in-depth analysis of the root causes of those attacks, and propose four properties that can break their fundamental assumptions. Second, we identify a set of operations that can meet those properties. By integrating these operations, we design two preprocessing functions that can invalidate these powerful attacks. Extensive evaluations indicate that our solutions can effectively mitigate all existing standard and advanced attack techniques, and beat 11 state-of-the-art defense solutions published in top-tier conferences over the past 2 years. The defender can employ our solutions to constrain the attack success rate below 7% for the strongest attacks, even if the adversary has spent dozens of GPU hours.
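As an illustration of the defense class discussed above (randomized, non-differentiable input preprocessing), here is a small Python/NumPy sketch. It is not one of the two functions proposed in the paper; the padding range, random crop, and quantization step are arbitrary choices for demonstration.

```python
# Randomized, non-differentiable preprocessing applied to inputs before the
# classifier: randomness breaks gradient alignment across queries, and
# quantization removes the smooth gradients that BPDA/EOT-style attacks rely on.
import numpy as np

def preprocess(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """image: HxWxC float array in [0, 1]."""
    h, w, c = image.shape
    # Random padding shifts the content by a few pixels each call.
    pad = rng.integers(0, 5, size=4)          # top, bottom, left, right
    padded = np.pad(image, ((pad[0], pad[1]), (pad[2], pad[3]), (0, 0)), mode="edge")
    # Random crop back to the original size.
    y = rng.integers(0, padded.shape[0] - h + 1)
    x = rng.integers(0, padded.shape[1] - w + 1)
    cropped = padded[y:y + h, x:x + w, :]
    # Coarse quantization is non-differentiable, frustrating gradient estimation.
    return np.round(cropped * 16) / 16

rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))
out = preprocess(img, rng)
assert out.shape == img.shape
```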
Privacy protection is an important research area, which is especially critical in this big data era. To a large extent, the privacy of visual classification data lies mainly in the mapping between the image and its corresponding label, since this relation provides a great amount of information and can be used in other scenarios. In this paper, we propose the mapping distortion based protection (MDP) and its augmentation-based extension (AugMDP) to protect data privacy by modifying the original dataset. In the modified dataset generated by MDP, the image and its label are not consistent (e.g., a cat-like image is labeled as a dog), whereas DNNs trained on it can still achieve good performance on the benign testing set. As such, this method can protect privacy when the dataset is leaked. Extensive experiments are conducted, which verify the effectiveness and feasibility of our method. The code for reproducing the main results is available at https://github.com/PerdonLiu/Visual-Privacy-Protection-via-Mapping-Distortion.
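The abstract above describes releasing image/label pairs whose mapping is deliberately distorted. The Python sketch below shows only the simplest form of that idea, a fixed label permutation with no fixed points; the paper's MDP/AugMDP constructions are more involved, since they must also keep models trained on the distorted data accurate on clean test data.

```python
# Relabel a dataset so that no sample keeps its true label, hiding the original
# image-to-label mapping in a leaked copy.
import numpy as np

def distort_labels(labels: np.ndarray, num_classes: int, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    cycle = rng.permutation(num_classes)        # random ordering of the classes
    mapping = np.empty(num_classes, dtype=int)
    mapping[cycle] = np.roll(cycle, -1)         # each class maps to the next in the cycle
    return mapping[labels]

labels = np.array([0, 1, 2, 3, 2, 1])
distorted = distort_labels(labels, num_classes=4)
assert np.all(distorted != labels)              # every sample now carries a wrong label
```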
Quantum copy protection uses the unclonability of quantum states to construct quantum software that provably cannot be pirated. Copy protection would be immensely useful, but unfortunately little is known about how to achieve it in general. In this work, we make progress on this goal, by giving the following results: - We show how to copy protect any program that cannot be learned from its input/output behavior, relative to a classical oracle. This improves on Aaronson [CCC09], which achieves the same relative to a quantum oracle. By instantiating the oracle with post-quantum candidate obfuscation schemes, we obtain a heuristic construction of copy protection. - We show, roughly, that any program which can be watermarked can be copy detected, a weaker version of copy protection that does not prevent copying, but guarantees that any copying can be detected. Our scheme relies on the security of the assumed watermarking, plus the assumed existence of public key quantum money. Our construction is general, applicable to many recent watermarking schemes.