
A Case for Reversible Coherence Protocol

Published by: You Wu
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





We propose the first Reversible Coherence Protocol (RCP), a new protocol designed from the ground up to enable invisible speculative loads. RCP takes a bold approach: it includes speculative loads and merge/purge operations in the interface between the processor and cache coherence, and allows them to participate in the coherence protocol. This means that speculative loads, ordinary loads/stores, and merge/purge operations can all affect the state of a given cache line. RCP is the first coherence protocol that enables the commit and squash of speculative loads among distributed cache components in a general memory hierarchy. RCP incurs an average slowdown of (3.0%, 8.3%, 7.4%) on (SPEC2006, SPEC2017, PARSEC), which is lower than the (26.5%, 12%, 18.3%) of InvisiSpec and the (3.2%, 9.4%, 24.2%) of CleanupSpec. The coherence traffic overhead is 46% on average, compared to 40% for InvisiSpec and 27% for CleanupSpec. Even with this higher traffic overhead (~46%), the performance overhead of RCP is lower than that of InvisiSpec and comparable to that of CleanupSpec. This reveals a key advantage of RCP: the coherence actions triggered by the merge and purge operations are not on the critical path of execution and can be performed in the cache hierarchy concurrently with processor execution.
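The abstract states that speculative loads, merges, and purges all act on cache-line state, but does not spell out the state machine. The following is a minimal Python sketch under assumed, simplified states; the SPEC_SHARED state and the transition rules are hypothetical illustrations, not RCP's actual protocol states.

```python
from enum import Enum, auto

class LineState(Enum):
    """Simplified cache-line states; the real protocol's state space is richer."""
    INVALID = auto()
    SHARED = auto()
    MODIFIED = auto()
    SPEC_SHARED = auto()  # hypothetical: line installed by a speculative load

class CacheLine:
    """Toy model of a line whose state can be changed by speculative loads
    and by merge/purge operations, not only by ordinary loads/stores."""
    def __init__(self):
        self.state = LineState.INVALID

    def speculative_load(self):
        # A speculative load installs the line in a speculative state instead
        # of being hidden from the coherence protocol.
        if self.state == LineState.INVALID:
            self.state = LineState.SPEC_SHARED

    def merge(self):
        # Commit: the speculative state is folded into an ordinary state.
        # Such coherence actions can proceed off the processor's critical path.
        if self.state == LineState.SPEC_SHARED:
            self.state = LineState.SHARED

    def purge(self):
        # Squash: the speculative state is rolled back, leaving no trace.
        if self.state == LineState.SPEC_SHARED:
            self.state = LineState.INVALID
```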




Read also

The use of multi-chip modules (MCM) and/or multi-socket boards is the most suitable approach to increase the computation density of servers while keeping chip yield attainable. This paper introduces a new coherence protocol that is suitable, in terms of complexity and scalability, for this class of systems. The proposal uses two complementary ideas: (1) a mechanism that dissociates complexity from performance by means of colored-token counting, and (2) a construct that optimizes performance and cost by means of two functionally symmetrical modules working in the last-level cache of each chip (D|F-LLC) and in each memory controller (D|F-MEM). Each of these structures is divided into two parts: (2.1) the first is a small, loosely inclusive sparse directory where only the most actively shared data are tracked in the chip (D-LLC) or from each memory controller (D-MEM), and (2.2) the second is a d-left Counting Bloom Filter which stores approximate information about the blocks allocated, either inside the chip (F-LLC) or in the home memory controller (F-MEM). The coordinated work of both structures minimizes the coherence-related effects on the average memory latency perceived by the processor. Our proposal improves on the performance of a HyperTransport-like coherence protocol by 25% to 60%.
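The F-LLC/F-MEM structures above are d-left Counting Bloom Filters. As a rough illustration of the underlying idea only, here is a plain counting Bloom filter sketch; the d-left organization, the sizes, and the hash scheme of the actual design are not reproduced, so everything below is an assumption.

```python
import hashlib

class CountingBloomFilter:
    """Plain counting Bloom filter: approximate membership over block
    addresses, with deletion supported via per-slot counters."""
    def __init__(self, num_slots: int = 1024, num_hashes: int = 3):
        self.counters = [0] * num_slots
        self.num_hashes = num_hashes

    def _slots(self, key):
        # Derive num_hashes slot indices from the key.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % len(self.counters)

    def add(self, block_addr: int) -> None:
        for s in self._slots(block_addr):
            self.counters[s] += 1

    def remove(self, block_addr: int) -> None:
        for s in self._slots(block_addr):
            if self.counters[s] > 0:
                self.counters[s] -= 1

    def may_contain(self, block_addr: int) -> bool:
        # False positives are possible; false negatives are not.
        return all(self.counters[s] > 0 for s in self._slots(block_addr))
```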
Hardware specialization is becoming a key enabler of energy-efficient performance. Future systems will be increasingly heterogeneous, integrating multiple specialized and programmable accelerators, each with different memory demands. Traditionally, communication between accelerators has been inefficient, typically orchestrated through explicit DMA transfers between different address spaces. More recently, industry has proposed unified coherent memory, which enables implicit data movement and more data reuse, but these interfaces often limit the coherence flexibility available to heterogeneous systems. This paper demonstrates the benefits of fine-grained coherence specialization for heterogeneous systems. We propose an architecture that enables low-complexity, independent specialization of each individual coherence request in heterogeneous workloads by building upon a simple and flexible baseline coherence interface, Spandex. We then describe how to optimize individual memory requests to improve cache reuse and performance-critical memory latency in emerging heterogeneous workloads. Collectively, our techniques enable significant gains, reducing execution time by up to 61% or network traffic by up to 99% while adding minimal complexity to the Spandex protocol.
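As an illustration of what per-request coherence specialization can look like, the sketch below picks a request flavor for each individual access based on a reuse hint. The request names, the hint, and the threshold are all hypothetical and are not the actual Spandex request types.

```python
from dataclasses import dataclass
from enum import Enum, auto

class ReqFlavor(Enum):
    """Illustrative coherence request flavors (not Spandex's real vocabulary)."""
    SELF_INVALIDATING_READ = auto()  # no ownership; cheap for low-reuse data
    OWNERSHIP = auto()               # bring the line in owned; good for high reuse
    WRITE_THROUGH = auto()           # push the update toward the shared cache

@dataclass
class AccessHint:
    """Per-access hint, e.g. from profiling or annotation (assumed source)."""
    is_write: bool
    expected_reuse: float  # 0.0 (streamed once) .. 1.0 (heavily reused)

def specialize_request(hint: AccessHint, reuse_threshold: float = 0.5) -> ReqFlavor:
    """Choose a request flavor per access rather than one fixed policy per
    device, mirroring the idea of fine-grained specialization."""
    if hint.is_write:
        return (ReqFlavor.OWNERSHIP if hint.expected_reuse >= reuse_threshold
                else ReqFlavor.WRITE_THROUGH)
    return (ReqFlavor.OWNERSHIP if hint.expected_reuse >= reuse_threshold
            else ReqFlavor.SELF_INVALIDATING_READ)
```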
Formal analyses of incentives for compliance with network protocols often appeal to game-theoretic models and concepts. Applications of game-theoretic analysis to network security have generally been limited to highly stylized models, where simplified environments enable tractable study of key strategic variables. We propose a simulation-based approach to game-theoretic analysis of protocol compliance, for scenarios with large populations of agents and large policy spaces. We define a general procedure for systematically exploring a structured policy space, directed expressly at resolving the qualitative classification of equilibrium behavior as compliant or non-compliant. The techniques are illustrated and exercised through an extensive case study analyzing compliance incentives for introduction-based routing. We find that the benefits of complying with the protocol are particularly strong for nodes subject to attack, and the overall compliance level achieved in equilibrium, while not universal, is sufficient to support the desired security goals of the protocol.
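A minimal sketch of the simulation-based idea: best-response dynamics over a population of nodes choosing whether to comply, with a stylized payoff in which compliance pays off most for nodes under attack. All functions and numbers below are invented for illustration and are not taken from the study.

```python
import random

def payoff(comply: bool, others_comply_frac: float, under_attack: bool) -> float:
    """Stylized payoff: complying has a small fixed cost but yields protection
    that grows with the fraction of compliant peers, especially under attack."""
    cost = 0.1 if comply else 0.0
    protection = (1.0 if comply else 0.3) * others_comply_frac
    exposure = 1.0 if under_attack else 0.2
    return exposure * protection - cost

def simulate(population: int = 100, attacked_frac: float = 0.3,
             rounds: int = 50, seed: int = 0) -> float:
    """Best-response dynamics: each round, every node picks the action with
    the higher payoff given the current compliance level of the others."""
    rng = random.Random(seed)
    attacked = [rng.random() < attacked_frac for _ in range(population)]
    comply = [rng.random() < 0.5 for _ in range(population)]
    for _ in range(rounds):
        frac = sum(comply) / population
        comply = [payoff(True, frac, a) >= payoff(False, frac, a) for a in attacked]
    return sum(comply) / population

if __name__ == "__main__":
    # With these toy numbers, attacked nodes settle on compliance while
    # unattacked nodes do not, so equilibrium compliance is partial.
    print(f"equilibrium compliance level: {simulate():.2f}")
```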
Luyi Kang, Yuqi Xue, Weiwei Jia (2021)
In-storage computing with modern solid-state drives (SSDs) enables developers to offload programs from the host to the SSD. It has been proven to be an effective approach to alleviate the I/O bottleneck. To facilitate in-storage computing, many frameworks have been proposed. However, few of them treat in-storage security as a first-class concern. Specifically, since modern SSD controllers do not have a trusted execution environment, an offloaded (malicious) program could steal, modify, and even destroy the data stored in the SSD. In this paper, we first investigate the attacks that could be conducted by offloaded in-storage programs. To defend against these attacks, we build a lightweight trusted execution environment, named IceClave, for in-storage computing. IceClave enables security isolation between in-storage programs and flash management functions, which include flash address translation, data access control, and garbage collection, using TrustZone extensions. IceClave also achieves security isolation between in-storage programs by enforcing memory integrity verification of in-storage DRAM with low overhead. To protect data loaded from flash chips, IceClave develops a lightweight data encryption/decryption mechanism in flash controllers. We develop IceClave with a full-system simulator and evaluate it with a variety of data-intensive applications such as databases. Compared to state-of-the-art in-storage computing approaches, IceClave introduces only 7.6% performance overhead, while enforcing security isolation in the SSD controller with minimal hardware cost. IceClave still keeps the performance benefit of in-storage computing by delivering up to 2.31× better performance than the conventional host-based trusted computing approach.
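As a toy illustration of memory integrity verification for in-storage DRAM, the sketch below keeps an HMAC per buffer and checks it before the contents are trusted. IceClave's actual verification scheme, key management, and granularity are not described in the abstract, so everything here is an assumption.

```python
import hashlib
import hmac

class IntegrityProtectedBuffer:
    """Toy model: a MAC is maintained for each DRAM buffer by trusted code
    and verified before data touched by untrusted in-storage programs is used."""
    def __init__(self, key: bytes, data: bytes):
        self._key = key
        self.data = bytearray(data)
        self._mac = self._compute_mac()

    def _compute_mac(self) -> bytes:
        return hmac.new(self._key, bytes(self.data), hashlib.sha256).digest()

    def trusted_write(self, offset: int, payload: bytes) -> None:
        # Writes performed by trusted flash-management code update the MAC.
        self.data[offset:offset + len(payload)] = payload
        self._mac = self._compute_mac()

    def verify(self) -> bool:
        # Detects tampering of the buffer by an untrusted offloaded program.
        return hmac.compare_digest(self._mac, self._compute_mac())
```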
DRAM is the dominant main memory technology used in modern computing systems. Computing systems implement a memory controller that interfaces with DRAM via DRAM commands. DRAM executes the given commands using internal components (e.g., access transistors, sense amplifiers) that are orchestrated by DRAM internal timings, which are fixed for each DRAM command. Unfortunately, the use of fixed internal timings limits the types of operations that DRAM can perform and hinders the implementation of new functionalities and custom mechanisms that improve DRAM reliability, performance, and energy. To overcome these limitations, we propose enabling programmable DRAM internal timings for controlling in-DRAM components. To this end, we design CODIC, a new low-cost DRAM substrate that enables fine-grained control over four previously fixed internal DRAM timings that are key to many DRAM operations. We implement CODIC with only minimal changes to the DRAM chip and the DDRx interface. To demonstrate the potential of CODIC, we propose two new CODIC-based security mechanisms that outperform state-of-the-art mechanisms in several ways: (1) a new DRAM Physical Unclonable Function (PUF) that is more robust and has significantly higher throughput than state-of-the-art DRAM PUFs, and (2) the first cold boot attack prevention mechanism that does not introduce any performance or energy overheads at runtime.
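A rough sketch of what programmable internal timings might look like as a configuration interface: the timing names and values are invented for illustration (the four timings CODIC actually exposes, and their ranges, are not reproduced here), and the "PUF mode" simply shortens some of them so that cell behavior becomes dependent on process variation.

```python
from dataclasses import dataclass

@dataclass
class InternalTimings:
    """Hypothetical knobs for previously fixed internal DRAM timings
    (names and values are illustrative, not CODIC's actual signal set)."""
    t_sense_enable_ns: float
    t_precharge_ns: float
    t_wordline_on_ns: float
    t_restore_ns: float

# Conservative settings that mimic ordinary, reliable operation (assumed values).
RELIABLE_DEFAULTS = InternalTimings(
    t_sense_enable_ns=4.0,
    t_precharge_ns=13.0,
    t_wordline_on_ns=2.0,
    t_restore_ns=15.0,
)

def puf_mode(defaults: InternalTimings, scale: float = 0.3) -> InternalTimings:
    """Deliberately shortened timings make cell behavior depend on process
    variation, which is the kind of effect a DRAM-based PUF can exploit."""
    return InternalTimings(
        t_sense_enable_ns=defaults.t_sense_enable_ns * scale,
        t_precharge_ns=defaults.t_precharge_ns,
        t_wordline_on_ns=defaults.t_wordline_on_ns * scale,
        t_restore_ns=defaults.t_restore_ns * scale,
    )
```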