
IceClave: A Trusted Execution Environment for In-Storage Computing

Published by: Yuqi Xue
Publication date: 2021
Research field: Informatics engineering
Paper language: English





In-storage computing with modern solid-state drives (SSDs) enables developers to offload programs from the host to the SSD. It has been proven to be an effective approach to alleviate the I/O bottleneck. To facilitate in-storage computing, many frameworks have been proposed. However, few of them treat in-storage security as a first-class citizen. Specifically, since modern SSD controllers do not have a trusted execution environment, an offloaded (malicious) program could steal, modify, and even destroy the data stored in the SSD. In this paper, we first investigate the attacks that could be conducted by offloaded in-storage programs. To defend against these attacks, we build a lightweight trusted execution environment, named IceClave, for in-storage computing. Using TrustZone extensions, IceClave enables security isolation between in-storage programs and flash management functions, which include flash address translation, data access control, and garbage collection. IceClave also achieves security isolation between in-storage programs by enforcing memory integrity verification of in-storage DRAM with low overhead. To protect data loaded from flash chips, IceClave develops a lightweight data encryption/decryption mechanism in flash controllers. We develop IceClave with a full-system simulator and evaluate it with a variety of data-intensive applications such as databases. Compared to state-of-the-art in-storage computing approaches, IceClave introduces only 7.6% performance overhead, while enforcing security isolation in the SSD controller with minimal hardware cost. IceClave retains the performance benefit of in-storage computing, delivering up to 2.31$\times$ better performance than the conventional host-based trusted computing approach.
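The abstract does not detail the encryption mechanism in the flash controller, so the sketch below is only a plausible illustration: pages are encrypted with AES-CTR under a controller-held key, with the counter nonce derived from the physical page address. The cipher choice, page size, and all names here are assumptions, not IceClave's actual design.

```python
# Hypothetical per-page encryption layer in the flash controller path.
# The abstract only states that a lightweight encryption/decryption
# mechanism sits in the flash controller; AES-CTR and the PPA-derived
# nonce below are illustrative assumptions.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

PAGE_SIZE = 4096  # bytes per flash page (typical value, assumed)

class FlashCrypto:
    def __init__(self, key: bytes):
        assert len(key) == 16  # AES-128 keeps controller overhead low
        self.key = key

    def _cipher(self, ppa: int) -> Cipher:
        # Derive a 16-byte CTR nonce from the physical page address so
        # each page gets its own keystream. A real design would also
        # mix in a per-write counter to avoid nonce reuse on rewrites.
        nonce = ppa.to_bytes(16, "little")
        return Cipher(algorithms.AES(self.key), modes.CTR(nonce))

    def encrypt_page(self, ppa: int, plaintext: bytes) -> bytes:
        enc = self._cipher(ppa).encryptor()
        return enc.update(plaintext) + enc.finalize()

    def decrypt_page(self, ppa: int, ciphertext: bytes) -> bytes:
        dec = self._cipher(ppa).decryptor()
        return dec.update(ciphertext) + dec.finalize()

# Data written to flash is sealed; data loaded for an in-storage
# program is unsealed only inside the controller.
fc = FlashCrypto(os.urandom(16))
page = os.urandom(PAGE_SIZE)
assert fc.decrypt_page(7, fc.encrypt_page(7, page)) == page
```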




Read also

ARM TrustZone is the de-facto hardware TEE implementation on mobile devices like smartphones. As a vendor-centric TEE, TrustZone largely overlooks the strong protection demands and requirements of App developers. Several security solutions have been proposed to enable TEE-assisted isolation in the Normal World of ARM, attempting to balance security and usability. However, they still fall short of serving Apps' needs. In this paper, we introduce LEAP, a lightweight, App-developer-centric TEE solution in the Normal World. LEAP offers an automated DevOps tool to help developers prepare the code running on it, enables isolated code to execute in parallel and access peripherals (e.g., mobile GPUs) with ease, and dynamically manages system resources upon Apps' requests. We implement the LEAP prototype on an off-the-shelf ARM platform without any hardware change. We perform comprehensive analyses and experiments to demonstrate that LEAP is efficient in design, comprehensive in support, and convenient in adoption.
Trusted Execution Environments (TEEs) are used to protect sensitive data and run secure execution for security-critical applications by providing an environment isolated from the rest of the system. However, over the last few years, TEEs have been proven weak, as both TEEs built upon security-oriented hardware extensions (e.g., Arm TrustZone) and those resorting to dedicated secure elements have been exploited multiple times. In this project, we introduce Trusted Execution Environments On-Demand (TEEOD), a novel TEE design that leverages the programmable logic (PL) in heterogeneous systems-on-chip (SoCs) as the secure execution environment. Unlike other TEE designs, TEEOD can provide high-bandwidth connections and physical on-chip isolation. We implemented a proof-of-concept (PoC) targeting an Ultra96-V2 platform. Our evaluation demonstrates that TEEOD can host up to 6 simultaneous enclaves with a per-enclave resource usage of 7.0%, 3.8%, and 15.3% of the total LUTs, FFs, and BRAMs, respectively. To demonstrate the practicability of TEEOD in real-world applications, we successfully ran a legacy open-source Bitcoin wallet.
Hung Dang, Ee-Chien Chang (2019)
Data privacy is unarguably of extreme importance. Nonetheless, there exist various daunting challenges to safeguarding data privacy. These challenges stem from the fact that data owners have little control over their data once it has left their local storage and is managed by third parties whose trustworthiness is questionable at times. Our work seeks to enhance data privacy by constructing a self-expiring data capsule. Sensitive data is encapsulated into a capsule which is associated with an access policy and an expiring condition. The former indicates the eligibility of functions that can access the data, and the latter dictates when the data should become inaccessible to anyone, including the previously eligible functions. Access to the data capsule, as well as its dismantling once the expiring condition is met, is governed by a committee of independent and mutually distrusting nodes. The pivotal contribution of our work is an integration of hardware primitives, state machine replication, and threshold secret sharing in the design of the self-expiring data encapsulation framework. We implement the proposed framework in a system called TEEKAP. Our empirical experiments, conducted in a realistic deployment setting with the access control committee spanning four geographical regions, reveal that TEEKAP can process access requests at scale with sub-second latency.
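The abstract does not give the parameters of TEEKAP's threshold secret sharing, but a standard (t, n) Shamir split of the capsule's data key, sketched below, captures the idea: any t committee nodes can jointly reconstruct the key, and once the expiring condition fires the nodes simply discard their shares, rendering the capsule permanently inaccessible. The field size and function names are illustrative, not TEEKAP's implementation.

```python
# Textbook (t, n) Shamir secret sharing of a capsule data key.
# TEEKAP's exact scheme and parameters are assumptions here.
import secrets

PRIME = 2**255 - 19  # a well-known prime; the field choice is illustrative

def split(secret: int, t: int, n: int):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(t - 1)]
    def poly(x):  # evaluate the degree-(t-1) polynomial at x
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for xj, yj in shares:
        num, den = 1, 1
        for xm, _ in shares:
            if xm != xj:
                num = num * (-xm) % PRIME
                den = den * (xj - xm) % PRIME
        secret = (secret + yj * num * pow(den, -1, PRIME)) % PRIME
    return secret

# A 5-node committee where any 3 nodes can unwrap the capsule key;
# deleting shares once the capsule expires destroys the key forever.
key = secrets.randbelow(PRIME)
shares = split(key, t=3, n=5)
assert reconstruct(shares[:3]) == key
assert reconstruct(shares[2:]) == key
```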
The use of trusted hardware has become a promising solution for enabling privacy-preserving machine learning. In particular, users can upload their private data and models to a hardware-enforced trusted execution environment (e.g., an enclave in Intel SGX-enabled CPUs) and run machine learning tasks in it with confidentiality and integrity guaranteed. To improve performance, AI accelerators have been widely employed for modern machine learning tasks. However, how to protect privacy on an AI accelerator remains an open question. To address this question, we propose a solution for efficient privacy-preserving machine learning based on an unmodified trusted CPU and a customized trusted AI accelerator. We carefully leverage cryptographic primitives to establish trust and protect the channel between the CPU and the accelerator. As a case study, we demonstrate our solution based on the open-source versatile tensor accelerator. Our evaluation shows that the proposed solution provides efficient privacy-preserving machine learning at a small design cost and moderate performance overhead.
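The abstract does not describe the channel protocol; one common realization, sketched below, is to encrypt-and-authenticate every CPU-to-accelerator transfer with an AEAD under a key established during attestation or provisioning. AES-GCM, the framing, and all names here are assumptions rather than the paper's design.

```python
# Hypothetical authenticated channel between the trusted CPU enclave
# and the trusted accelerator. The paper's actual protocol is not
# described in the abstract; AES-GCM framing here is an assumption.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class SecureChannel:
    def __init__(self, key: bytes):
        self.aead = AESGCM(key)  # shared key from attestation/provisioning
        self.seq = 0             # monotonically increasing, blocks replay

    def seal(self, payload: bytes) -> bytes:
        nonce = self.seq.to_bytes(12, "big")
        self.seq += 1
        # The GCM tag gives integrity: tampering on the bus between
        # CPU and accelerator makes open() raise InvalidTag.
        return nonce + self.aead.encrypt(nonce, payload, b"cpu->acc")

    def open(self, frame: bytes) -> bytes:
        nonce, ciphertext = frame[:12], frame[12:]
        return self.aead.decrypt(nonce, ciphertext, b"cpu->acc")

# The enclave seals a chunk of model weights; only the accelerator,
# holding the same key, can open and verify it.
key = AESGCM.generate_key(bit_length=128)
cpu, accelerator = SecureChannel(key), SecureChannel(key)
weights_chunk = os.urandom(64)
assert accelerator.open(cpu.seal(weights_chunk)) == weights_chunk
```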
This work presents Origami, which provides privacy-preserving inference for large deep neural network (DNN) models through a combination of enclave execution and cryptographic blinding, interspersed with accelerator-based computation. Origami splits the ML model into multiple partitions. The first partition receives the encrypted user input within an SGX enclave. The enclave decrypts the input and then applies cryptographic blinding to the input data and the model parameters. Cryptographic blinding is a technique that adds noise to obfuscate data. Origami sends the obfuscated data for computation to an untrusted GPU/CPU. The blinding and de-blinding factors are kept private by the SGX enclave, thereby preventing any adversary from denoising the data when the computation is offloaded to a GPU/CPU. The computed output is returned to the enclave, which decodes the computation on noisy data using the unblinding factors privately stored within SGX. This process could be repeated for each DNN layer, as in the prior work Slalom; however, the overhead of blinding and unblinding the data is a limiting factor for scalability. Origami instead relies on the empirical observation that the feature maps after the first several layers cannot be used, even by a powerful conditional GAN adversary, to reconstruct the input. Hence, Origami dynamically switches to executing the rest of the DNN layers directly on an accelerator, without any further cryptographic blinding, while still preserving privacy. We empirically demonstrate that under Origami, a conditional GAN adversary, even with an unlimited inference budget, cannot reconstruct the input. We implement Origami and demonstrate its performance gains using the VGG-16 and VGG-19 models. Compared to running the entire VGG-19 model within SGX, Origami improves the speedup of private inference from 11x (with Slalom) to 15.1x.
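For a linear layer, the blinding step described above is easy to see concretely: the enclave reveals only x + r to the untrusted GPU, receives W(x + r), and subtracts the precomputed W·r to recover W·x without exposing x. The minimal sketch below uses floating point for readability; actual schemes such as Slalom blind quantized values over a finite field.

```python
# Slalom-style additive blinding of one linear layer, the mechanism
# Origami applies in its first partition. Floating point is a
# simplification; real schemes work over a finite field.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 784))  # layer weights (known to the GPU)
x = rng.standard_normal(784)         # private input (enclave only)

# Offline, inside the enclave: draw a blinding factor r and
# precompute the unblinding term W @ r, which never leaves SGX.
r = rng.standard_normal(784)
Wr = W @ r

# Online: only the blinded input crosses the enclave boundary.
blinded = x + r           # the noise r hides x from the GPU
gpu_result = W @ blinded  # untrusted GPU does the heavy computation

# Back inside the enclave: de-blind to obtain the true activation.
y = gpu_result - Wr
assert np.allclose(y, W @ x)
```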