
ShEF: Shielded Enclaves for Cloud FPGAs

Published by: Mark Zhao
Publication date: 2021
Research field: Computer Engineering
Paper language: English





FPGAs are now used in public clouds to accelerate a wide range of applications, including many that operate on sensitive data such as financial and medical records. We present ShEF, a trusted execution environment (TEE) for cloud-based reconfigurable accelerators. ShEF is independent of CPU-based TEEs and allows secure execution under a threat model where the adversary can control all software running on the CPU connected to the FPGA, has physical access to the FPGA, and can compromise the cloud provider's FPGA interface logic. ShEF provides a secure boot and remote attestation process that relies solely on existing FPGA mechanisms for its root of trust. It also includes a Shield component that provides secure access to data while the accelerator is in use. The Shield is highly customizable and extensible, allowing users to craft a bespoke security solution that fits their accelerator's memory access patterns, bandwidth, and security requirements with minimal performance and area overhead. We describe a prototype implementation of ShEF for existing cloud FPGAs and measure the performance benefits of customizable security using five accelerator designs.
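
To make the boot-and-attest flow concrete, below is a minimal Python sketch of the check a verifier might run before provisioning secrets to a cloud FPGA. It is an illustration only, not ShEF's actual protocol or API: ShEF's attestation chains to the FPGA's hardware key and bitstream-authentication mechanisms, whereas this sketch stands in a shared-secret HMAC for the real signature chain, and every name in it is an assumption.

```python
import hashlib
import hmac
from dataclasses import dataclass

@dataclass
class AttestationReport:
    bitstream_hash: bytes  # hash of the accelerator bitstream loaded on the FPGA
    nonce: bytes           # verifier-chosen freshness value echoed by the device
    mac: bytes             # tag over (hash || nonce), keyed by the device secret

def make_report(device_key: bytes, bitstream: bytes, nonce: bytes) -> AttestationReport:
    """What device-side firmware would produce (HMAC stands in for a signature)."""
    h = hashlib.sha256(bitstream).digest()
    tag = hmac.new(device_key, h + nonce, hashlib.sha256).digest()
    return AttestationReport(h, nonce, tag)

def verify_report(device_key: bytes, report: AttestationReport,
                  expected_bitstream: bytes, nonce: bytes) -> bool:
    # Freshness: the report must answer *our* challenge nonce.
    if report.nonce != nonce:
        return False
    # Integrity: the report must commit to the exact bitstream we audited.
    if report.bitstream_hash != hashlib.sha256(expected_bitstream).digest():
        return False
    # Authenticity: the tag must verify under the device's key (root of trust).
    expected = hmac.new(device_key, report.bitstream_hash + nonce,
                        hashlib.sha256).digest()
    return hmac.compare_digest(report.mac, expected)

key = b"hypothetical-device-root-of-trust"
bitstream = b"...accelerator bitstream bytes..."
nonce = b"\x01" * 16
assert verify_report(key, make_report(key, bitstream, nonce), bitstream, nonce)
```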




Read also

Enclaves, such as those enabled by Intel SGX, offer a powerful hardware isolation primitive for application partitioning. To become universally usable on future commodity OSes, enclave designs should offer compatibility with existing software. In this paper, we draw attention to 5 design decisions in SGX that create incompatibility with existing software. These represent, we hope, concrete starting points for improvements in future TEEs. Further, while many prior works have offered partial forms of compatibility, we present the first attempt to offer binary compatibility with existing software on SGX. We present Ratel, a system that enables a dynamic binary translation engine inside SGX enclaves on Linux. Through the lens of Ratel, we expose the fundamental trade-offs between performance and complete mediation of the OS-enclave interface, which are rooted in the aforementioned 5 SGX design restrictions. We report on an extensive evaluation of Ratel on over 200 programs, including micro-benchmarks and real applications such as Linux utilities.
Enclaves, such as those enabled by Intel SGX, offer a hardware primitive for shielding user-level applications from the OS. While enclaves are a useful starting point, code running in the enclave requires additional checks whenever control or data is transferred to or from the untrusted OS. The enclave-OS interface on SGX, however, can be extremely large if we wish to run existing unmodified binaries inside enclaves. This paper presents Ratel, a dynamic binary translation engine running inside SGX enclaves on Linux. Ratel offers complete interposition: the ability to interpose on all executed instructions in the enclave and monitor all interactions with the OS. Instruction-level interposition offers a general foundation for implementing a large variety of inline security monitors in the future. We take a principled approach in explaining why complete interposition on SGX is challenging. We draw attention to 5 design decisions in SGX that create fundamental trade-offs between performance and ensuring complete interposition, and we explain how to resolve them in favor of complete interposition. To illustrate the utility of the Ratel framework, we present the first attempt to offer binary compatibility with existing software on SGX. We report that Ratel offers binary compatibility with over 200 programs we tested, including micro-benchmarks and real applications such as Linux shell utilities. Runtimes for two programming languages, namely Python and R, work out-of-the-box on Ratel under standard benchmarks without any specialized handling.
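
As a conceptual illustration of complete interposition (not Ratel's actual machinery, which rewrites x86 instructions inside the enclave), the Python sketch below models only the policy layer: every OS interaction is funneled through a single chokepoint where it is checked before crossing the trust boundary. `host_syscall`, the policy table, and all names here are invented stand-ins.

```python
from typing import Any, Callable, Dict

def host_syscall(name: str, *args: Any) -> Any:
    """Stand-in for the untrusted OS; in SGX this would be an OCALL."""
    fake_os: Dict[str, Callable[..., Any]] = {
        "getpid": lambda: 4242,
        "read": lambda fd, n: b"x" * min(n, 16),
    }
    return fake_os[name](*args)

class Interposer:
    """Single chokepoint through which all enclave-OS interactions pass."""

    def __init__(self) -> None:
        self.policy: Dict[str, Callable[..., bool]] = {}

    def allow(self, name: str, check: Callable[..., bool]) -> None:
        self.policy[name] = check

    def call(self, name: str, *args: Any) -> Any:
        check = self.policy.get(name)
        if check is None or not check(*args):
            raise PermissionError(f"blocked OS call: {name}{args}")
        # A real monitor would also copy arguments out of enclave memory
        # and sanitize the untrusted return value before re-entering.
        return host_syscall(name, *args)

ipc = Interposer()
ipc.allow("getpid", lambda: True)
ipc.allow("read", lambda fd, n: fd >= 0 and 0 < n <= 4096)
assert ipc.call("getpid") == 4242
assert ipc.call("read", 3, 8) == b"xxxxxxxx"
```
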
In their continuous growth and penetration into new markets, Field Programmable Gate Arrays (FPGAs) have recently made their way into hardware acceleration of machine learning, among other specialized compute-intensive services, in cloud data centers such as those of Amazon and Microsoft. To further maximize their utilization in the cloud, several academic works propose the spatial multi-tenant deployment model, where the FPGA fabric is simultaneously shared among mutually mistrusting clients. This is enabled by the partial reconfiguration property of FPGAs, which makes it possible to split the FPGA fabric into several logically isolated regions and reconfigure the functionality of each region independently at runtime. In this paper, we survey industrial and academic deployment models of multi-tenant FPGAs in cloud computing settings, and highlight their different adversary models and security guarantees, while shedding light on their fundamental shortcomings from a security standpoint. We further survey and classify existing academic works that demonstrate a new class of remotely exploitable physical attacks on multi-tenant FPGA devices, launched remotely by malicious clients sharing physical resources with victim users. By investigating the problem of end-to-end multi-tenant FPGA deployment more comprehensively, we reveal how these attacks represent only one dimension of the problem, while various open security and privacy challenges remain unaddressed. We conclude with our insights and a call for future research to tackle these challenges.
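
A toy model of the spatial multi-tenant deployment the survey examines is sketched below; the Region and FPGA classes are invented for illustration. The point it captures is that the static shell stays fixed while each logically isolated region is reprogrammed at runtime for a different, mutually mistrusting tenant, and the closing comment marks the survey's central caveat: the isolation is logical, not physical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Region:
    name: str
    tenant: Optional[str] = None       # who currently occupies this slot
    bitstream: Optional[bytes] = None  # partial bitstream loaded into it

class FPGA:
    """Static shell plus a fixed set of partial-reconfiguration regions."""

    def __init__(self, num_regions: int) -> None:
        self.regions = [Region(f"pr{i}") for i in range(num_regions)]

    def load(self, tenant: str, partial_bitstream: bytes) -> Region:
        """Reconfigure a free region at runtime without touching the others."""
        for r in self.regions:
            if r.tenant is None:
                r.tenant, r.bitstream = tenant, partial_bitstream
                return r
        raise RuntimeError("no free partial-reconfiguration region")

    def evict(self, region: Region) -> None:
        region.tenant, region.bitstream = None, None

fabric = FPGA(num_regions=4)
fabric.load("tenant-A", b"...partial bitstream A...")
fabric.load("tenant-B", b"...partial bitstream B...")
# Isolation here is logical only: both tenants still share the physical
# power-distribution network, which the surveyed remote attacks exploit.
```
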
The evolution of cloud applications into loosely coupled microservices opens new opportunities for hardware accelerators to improve workload performance. Existing cloud acceleration techniques, however, sacrifice the consolidation benefits of microservices. This paper presents CloudiFi, a framework to deploy and compare accelerators as a cloud service. We evaluate our framework in the context of a financial workload and present early results indicating up to 485x gains in microservice response time.
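
A rough sketch of the kind of apples-to-apples comparison such a framework enables follows; the function names and the mocked accelerator endpoint are assumptions, not CloudiFi's API. The idea is simply to time the same microservice step along a software path and an accelerator-backed path and compare mean response times.

```python
import time
from typing import Callable

def software_path(payload: bytes) -> bytes:
    """Stand-in for the CPU implementation of a microservice step."""
    return bytes(b ^ 0xFF for b in payload)

def accelerator_path(payload: bytes) -> bytes:
    """Stand-in for an RPC to an FPGA-backed service; a real call would
    pay a network/PCIe hop but finish the kernel itself far faster."""
    return bytes(b ^ 0xFF for b in payload)

def mean_latency(fn: Callable[[bytes], bytes], payload: bytes,
                 iters: int = 1000) -> float:
    start = time.perf_counter()
    for _ in range(iters):
        fn(payload)
    return (time.perf_counter() - start) / iters

payload = bytes(range(256)) * 16
print("software path   :", mean_latency(software_path, payload))
print("accelerator path:", mean_latency(accelerator_path, payload))
```
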
Yiren Zhao, Xitong Gao, Xuan Guo (2019)
Modern deep Convolutional Neural Networks (CNNs) are computationally demanding, yet real applications often require high throughput and low latency. To help tackle these problems, we propose Tomato, a framework designed to automate the process of generating efficient CNN accelerators. The generated design is pipelined, and each convolution layer uses different arithmetics at various precisions. Using Tomato, we showcase state-of-the-art multi-precision multi-arithmetic networks, including MobileNet-V1, running on FPGAs. To our knowledge, this is the first multi-precision multi-arithmetic auto-generation framework for CNNs. In software, Tomato fine-tunes pretrained networks to use a mixture of short powers-of-2 and fixed-point weights with a minimal loss in classification accuracy. The fine-tuned parameters are combined with the templated hardware designs to automatically produce efficient inference circuits on FPGAs. We demonstrate how our approach significantly reduces model sizes and computation complexities, and permits us, for the first time, to pack a complete ImageNet network onto a single FPGA without accessing off-chip memories. Furthermore, we show how Tomato produces implementations of networks of various sizes running on single or multiple FPGAs. To the best of our knowledge, our automatically generated accelerators outperform the closest FPGA-based competitors by at least 2-4x in latency and throughput; the generated accelerator runs ImageNet classification at a rate of more than 3000 frames per second.
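
The software half of that flow hinges on snapping pretrained weights into hardware-friendly formats. The sketch below shows the two quantizers such a scheme might combine, rounding weights either to signed powers of two (so multiplications become shifts) or to plain fixed point; the exponent range and bit-widths are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def quantize_pow2(w: np.ndarray, min_exp: int = -7, max_exp: int = 0) -> np.ndarray:
    """Round each weight to sign(w) * 2^k, with k clipped to a short range."""
    sign = np.sign(w)
    mag = np.abs(w)
    k = np.clip(np.round(np.log2(np.where(mag > 0, mag, 2.0 ** min_exp))),
                min_exp, max_exp)
    return sign * (2.0 ** k)

def quantize_fixed(w: np.ndarray, frac_bits: int = 6) -> np.ndarray:
    """Round each weight to the nearest multiple of 2^-frac_bits."""
    scale = 2.0 ** frac_bits
    return np.round(w * scale) / scale

w = np.array([0.30, -0.012, 0.55, -0.07])
print(quantize_pow2(w))   # [ 0.25 -0.015625  0.5 -0.0625 ] -> shifts in hardware
print(quantize_fixed(w))  # fixed-point alternative, chosen per layer
```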