
Deterministically Deterring Timing Attacks in Deterland

Posted by: Bryan Ford
Publication date: 2015
Research field: Informatics engineering
Paper language: English





The massive parallelism and resource sharing that embody today's cloud business model not only exacerbate the security challenge of timing channels, but also undermine the viability of defenses based on resource partitioning. We propose hypervisor-enforced timing mitigation to control timing channels in cloud environments. This approach closes reference clocks internal to the cloud by imposing a deterministic view of time on guest code, and uses timing mitigators to pace I/O and rate-limit potential information leakage to external observers. Our prototype hypervisor is the first system to mitigate timing-channel leakage across full-scale existing operating systems such as Linux and applications in arbitrary languages. Mitigation incurs a varying performance cost, depending on workload and tunable leakage-limiting parameters, but this cost may be justified for security-critical cloud applications and data.
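As a rough illustration of the pacing idea only (not the Deterland hypervisor's actual implementation), the sketch below models a mitigator that releases output to external observers only at quantized times and slows down, by doubling its quantum, whenever an output misses its slot; the names and the doubling policy are assumptions made for illustration. The point is that externally visible release times are set by the mitigator's schedule rather than by the guest's internal computation time.

```c
/* Minimal sketch of a pacing timing mitigator (illustrative only). */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t quantum_us;   /* current release interval */
    uint64_t next_slot_us; /* next permitted release time */
} mitigator_t;

/* Return the time at which a message that became ready at 'ready_us'
 * may be released to an external observer. If the message missed its
 * slot, the quantum doubles, rate-limiting what timing can reveal. */
uint64_t mitigate_release(mitigator_t *m, uint64_t ready_us)
{
    if (ready_us > m->next_slot_us) {            /* slot missed: slow down */
        m->quantum_us *= 2;
        uint64_t over = ready_us - m->next_slot_us;
        /* advance to the first slot at or after ready_us */
        m->next_slot_us += ((over / m->quantum_us) + 1) * m->quantum_us;
    }
    uint64_t release = m->next_slot_us;
    m->next_slot_us += m->quantum_us;            /* schedule the next slot */
    return release;
}

int main(void)
{
    mitigator_t m = { .quantum_us = 100, .next_slot_us = 100 };
    uint64_t ready_times[] = { 40, 90, 350, 360, 1200 };
    for (int i = 0; i < 5; i++)
        printf("ready=%llu release=%llu\n",
               (unsigned long long)ready_times[i],
               (unsigned long long)mitigate_release(&m, ready_times[i]));
    return 0;
}
```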


Read also

Designing protocols and formulating convenient programming units of abstraction for sensor networks is challenging due to communication errors and platform constraints. This paper investigates properties and implementation reliability for a local read-write abstraction. Local read-write is inspired by the class of read-modify-write operations defined for shared-memory multiprocessor architectures. The class of read-modify-write operations is important in solving consensus and related synchronization problems for concurrency control. Local read-write is shown to be an atomic abstraction for synchronizing neighborhood states in sensor networks. The paper compares local read-write to similar lightweight operations in wireless sensor networks, such as read-all, write-all, and a transaction-based abstraction: for some optimistic scenarios, local read-write is a more efficient neighborhood operation. A partial implementation is described, which shows that three outcomes characterize an operation's response: success, failure, and cancel. A failure response indicates possible inconsistency in the operation's result and is caused by a timeout event at the operation's initiator. The paper presents experimental results on operation performance with different timeout values and situations of no contention, with some tests also on various neighborhood sizes.
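A minimal sketch of what such a local read-write interface and its three outcomes could look like is given below; the names are hypothetical and the radio messaging layer is replaced by a local stub, so this only illustrates the semantics described above, not the paper's implementation.

```c
/* Sketch of a local read-write (read-modify-write on a neighbor) interface. */
#include <stdint.h>
#include <stdio.h>

typedef enum { LRW_SUCCESS, LRW_FAILURE, LRW_CANCEL } lrw_result_t;

/* Read-modify-write step: given the neighbor's current value, return
 * the value to write back. */
typedef uint32_t (*lrw_modify_fn)(uint32_t current);

/* Stand-in for one neighbor's shared variable. */
static uint32_t neighbor_var = 7;

/* In a real sensor network this would send a request over the radio and
 * wait up to timeout_ms for a reply. SUCCESS: neighbor acknowledged the
 * write. CANCEL: neighbor rejected it, state unchanged. FAILURE: the
 * initiator timed out, so the neighbor's state may or may not have been
 * updated (possible inconsistency). */
static lrw_result_t local_read_write(lrw_modify_fn modify, uint32_t timeout_ms)
{
    (void)timeout_ms;                 /* no radio in this stub */
    uint32_t old = neighbor_var;      /* "read" phase */
    neighbor_var = modify(old);       /* "modify-write" phase */
    return LRW_SUCCESS;
}

static uint32_t increment(uint32_t v) { return v + 1; }

int main(void)
{
    lrw_result_t r = local_read_write(increment, 50);
    printf("result=%d neighbor_var=%u\n", (int)r, neighbor_var);
    return 0;
}
```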
Yuxin Ren, Vlad Nitu, Guyue Liu (2019)
In the future, computing will be immersed in the world around us -- from augmented reality to autonomous vehicles to the Internet of Things. Many of these smart devices will offer services that respond in real time to their physical surroundings, requiring complex processing with strict performance guarantees. Edge clouds promise a pervasive computational infrastructure a short network hop away from end devices, but today's operating systems are a poor fit to meet the goals of scalable isolation, dense multi-tenancy, and predictable performance required by these emerging applications. In this paper we present EdgeOS, a micro-kernel based operating system that meets these goals by blending recent advances in real-time systems and network function virtualization. EdgeOS introduces a Featherweight Process model that offers lightweight isolation and supports extreme scalability even under high churn. Our architecture provides efficient communication mechanisms and low-overhead per-client isolation. To achieve high-performance networking, EdgeOS employs kernel bypass paired with the isolation properties of Featherweight Processes. We have evaluated our EdgeOS prototype for running high-scale network middleboxes using the Click software router and endpoint applications using memcached. EdgeOS reduces startup latency by 170x compared to Linux processes and by over five orders of magnitude compared to containers, while providing three orders of magnitude latency improvement when running 300 to 1000 edge-cloud memcached instances on one server.
For modern flash-based SSDs, the performance overhead of internal data migrations is dominated by the data transfer time, not by the flash program time as in old SSDs. In order to mitigate the performance impact of data migrations, we propose rCopyback, a restricted version of copyback. rCopyback works like the original copyback except that only n consecutive copybacks are allowed. By limiting the number of successive copybacks, it guarantees that no data reliability problem occurs when data is internally migrated using rCopyback. In order to take full advantage of rCopyback, we developed an rCopyback-aware FTL, rcFTL, which intelligently decides whether rCopyback should be used or not by exploiting varying host workloads. Our evaluation results show that rcFTL can improve the overall I/O throughput by 54% on average over an existing FTL which does not use copybacks.
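The core rule, limiting successive copybacks so that pages periodically pass back through ECC, can be sketched as below; the constant, structure, and names are illustrative assumptions, not rcFTL's actual data structures.

```c
/* Sketch of the restricted-copyback rule: at most N_COPYBACK consecutive
 * on-chip copybacks per page before a normal, ECC-corrected migration. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define N_COPYBACK 3   /* tunable: max successive copybacks allowed */

typedef struct {
    uint32_t ppa;            /* physical page address (illustrative) */
    uint8_t  copyback_count; /* successive copybacks since last ECC pass */
} page_meta_t;

/* Decide how to migrate a valid page during garbage collection.
 * Returns true if the fast on-chip copyback path may be used. */
bool migrate_page(page_meta_t *page)
{
    if (page->copyback_count < N_COPYBACK) {
        page->copyback_count++;   /* data stays on-chip; errors may accumulate */
        return true;              /* use copyback */
    }
    page->copyback_count = 0;     /* read out through ECC, rewrite cleanly */
    return false;                 /* use normal read-transfer-program */
}

int main(void)
{
    page_meta_t p = { .ppa = 0x1234, .copyback_count = 0 };
    for (int i = 0; i < 5; i++)
        printf("migration %d: %s\n", i,
               migrate_page(&p) ? "copyback" : "normal (ECC reset)");
    return 0;
}
```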
Kan Zhong, Wenlin Cui, Youyou Lu (2021)
Memory-intensive applications, such as in-memory databases, caching systems and key-value stores, are increasingly demanding larger main memory to fit their working sets. Conventional swapping can enlarge the memory capacity by paging out inactive pages to disks. However, the heavy I/O stack makes traditional kernel-based swapping suffer from several critical performance issues. In this paper, we redesign the swapping system and propose Lightswap, a high-performance user-space swapping scheme that supports paging with both local SSDs and remote memories. First, to avoid kernel involvement, a novel page fault handling mechanism handles page faults in user space and eliminates the heavy I/O stack with the help of user-space I/O drivers. Second, we co-design Lightswap with lightweight threads (LWTs) to improve system throughput and make it transparent to user applications. Finally, we propose a try-catch framework in Lightswap to deal with paging errors, which are exacerbated by the scaling of process technology. We implement Lightswap in our production-level system and evaluate it with YCSB workloads running on memcached. Results show that Lightswap reduces page fault handling latency by 3--5 times and improves the throughput of memcached by more than 40% compared with state-of-the-art swapping systems.
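For readers unfamiliar with user-space page fault handling, the sketch below uses Linux's userfaultfd facility, the standard kernel mechanism for delivering page faults to a user-space handler. Whether Lightswap builds on userfaultfd is not stated above, and its user-space I/O drivers, LWT integration, and try-catch error handling are not shown; the zero-fill here merely stands in for fetching a page from an SSD or remote memory.

```c
/* Minimal user-space page fault handling with userfaultfd (Linux).
 * Compile with -lpthread. Illustrative only. */
#define _GNU_SOURCE
#include <linux/userfaultfd.h>
#include <sys/syscall.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <pthread.h>
#include <unistd.h>
#include <fcntl.h>
#include <string.h>
#include <stdio.h>

static int uffd;
static long page_size;

static void *fault_handler(void *arg)
{
    (void)arg;
    /* Scratch page whose contents get installed at the faulting address. */
    char *src = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    for (;;) {
        struct uffd_msg msg;
        if (read(uffd, &msg, sizeof(msg)) != sizeof(msg)) continue;
        if (msg.event != UFFD_EVENT_PAGEFAULT) continue;
        /* A Lightswap-like handler would read the page from the swap device
         * or remote memory here; we simply hand back a zeroed page. */
        memset(src, 0, page_size);
        struct uffdio_copy copy = {
            .dst = msg.arg.pagefault.address & ~(page_size - 1),
            .src = (unsigned long)src,
            .len = page_size,
            .mode = 0,
        };
        ioctl(uffd, UFFDIO_COPY, &copy);
    }
    return NULL;
}

int main(void)
{
    page_size = sysconf(_SC_PAGE_SIZE);
    uffd = syscall(__NR_userfaultfd, O_CLOEXEC);
    if (uffd < 0) { perror("userfaultfd"); return 1; }

    struct uffdio_api api = { .api = UFFD_API, .features = 0 };
    ioctl(uffd, UFFDIO_API, &api);

    /* Region whose missing-page faults are delivered to user space. */
    size_t len = 4 * page_size;
    char *region = mmap(NULL, len, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    struct uffdio_register reg = {
        .range = { .start = (unsigned long)region, .len = len },
        .mode  = UFFDIO_REGISTER_MODE_MISSING,
    };
    ioctl(uffd, UFFDIO_REGISTER, &reg);

    pthread_t thr;
    pthread_create(&thr, NULL, fault_handler, NULL);

    /* First touch faults into the user-space handler above. */
    printf("region[0] = %d\n", region[0]);
    return 0;
}
```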
Host-side page victimizations can easily overflow the SSD internal buffer, which interferes with the I/O services of diverse user applications, thereby degrading the user-level experience. To address this, we propose FastDrain, a co-design of OS kernel and flash firmware to avoid the buffer overflow caused by page victimizations. Specifically, FastDrain can detect a triggering point where a near-future page victimization would introduce an overflow of the SSD internal buffer. Our new flash firmware then speculatively scrubs the buffer space to accommodate the requests caused by the page victimization. In parallel, our new OS kernel design controls the traffic of page victimizations by considering the target device's buffer status, which can further reduce the risk of buffer overflow. To secure more buffer space, we also design a latency-aware FTL, which dumps dirty data only to the fast flash pages. Our evaluation results reveal that FastDrain reduces the 99th-percentile response time of user applications by 84%, compared to a conventional system.
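The host-side triggering check could, in principle, look like the sketch below, under the assumption that the kernel can learn the SSD's internal buffer occupancy (e.g. through a vendor-specific query); all names, thresholds, and the query stub are hypothetical, not FastDrain's interface.

```c
/* Sketch of a buffer-overflow triggering check before page victimization. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define SSD_BUF_SLOTS 1024   /* internal buffer capacity, in pages (assumed) */

/* Occupancy the host last learned from the device (illustrative stub). */
static uint32_t query_ssd_buffer_used(void) { return 900; }

/* Called by the page-reclaim path before victimizing 'npages' dirty pages.
 * Returns true if the write-back would overflow the device buffer, in which
 * case the firmware would be asked to scrub (pre-flush) buffer space and the
 * kernel would throttle victimization until space is available. */
bool victimization_would_overflow(uint32_t npages)
{
    uint32_t used = query_ssd_buffer_used();
    return used + npages > SSD_BUF_SLOTS;
}

int main(void)
{
    uint32_t victims = 200;
    if (victimization_would_overflow(victims))
        printf("trigger: scrub device buffer, throttle %u victim pages\n",
               victims);
    else
        printf("safe: issue %u victim pages immediately\n", victims);
    return 0;
}
```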