
Electromigration-Aware Architecture for Modern Microprocessors

Published by: Freddy Gabbay
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Reliability is a fundamental requirement in any microprocessor to guarantee correct execution over its lifetime. The design rules related to reliability depend on the process technology being used and the expected operating conditions of the device. To meet reliability requirements, advanced process technologies (28 nm and below) impose highly challenging design rules. Such design-for-reliability rules have become a major burden on the VLSI implementation flow because of the severe physical constraints they impose. This paper focuses on electromigration (EM), one of the most critical factors affecting semiconductor reliability. EM is the aging process of on-die wires and vias induced by excessive current flow; it can damage wires and may also significantly impact the integrated-circuit clock frequency. EM exerts a comprehensive, global effect on devices because it impacts wires that may reside inside standard or custom logic cells, between logic cells, inside memory elements, and within the wires that interconnect functional blocks. The design-implementation flow (synthesis and place-and-route) currently detects violations of EM-reliability rules and attempts to solve them. In contrast, this paper proposes a new approach that enhances these flows with an EM-aware architecture. Our results show that the proposed solution can relax EM design efforts in microprocessors and more than double microprocessor lifetime. This work demonstrates the proposed approach on modern microprocessors, although the principles and ideas can be adapted to other cases as well.
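
The following minimal sketch (ours, not from the paper) uses Black's equation, the standard EM lifetime model, to show why an architectural technique that lowers the average current density through a wire can more than double its expected lifetime; the exponent values are typical textbook assumptions, not the paper's numbers.

def mttf_ratio(j_scale: float, n: float = 2.0) -> float:
    """Relative mean time to failure after scaling current density by j_scale.

    Black's equation: MTTF = A * J**(-n) * exp(Ea / (k * T)); the material
    constant A and the temperature term cancel when taking the ratio.
    """
    return j_scale ** (-n)

# Halving the average current density (e.g., by steering traffic across
# duplicated interconnect, one possible EM-aware architectural lever) with
# the commonly cited exponent n = 2 quadruples the expected wire lifetime.
print(mttf_ratio(0.5))         # 4.0
# Even with a conservative exponent n = 1.1, lifetime more than doubles.
print(mttf_ratio(0.5, n=1.1))  # ~2.14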




Read also

Reliability is a crucial requirement in any modern microprocessor to assure correct execution over its lifetime. As mission-critical components become common in commodity systems, e.g., the control of autonomous cars, the demand for reliable processing has heightened even further. The latest process technologies have worsened the situation, making microprocessor design highly susceptible to reliability concerns. This paper examines the asymmetric aging phenomenon, a major reliability concern in advanced process nodes, in which logic elements and memory cells suffer unequal timing degradation over time and consequently introduce reliability concerns. So far, most studies have approached asymmetric aging from a circuit or physical-design viewpoint, but these solutions were quite limited and suboptimal. In this paper we introduce an asymmetric-aging-aware microarchitecture that aims to reduce its impact. The study focuses mainly on the following subsystems: execution units, register files, and the memory hierarchy. Our experiments indicate that the proposed solutions incur minimal overhead while significantly mitigating asymmetric-aging stress.
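
As a deliberately simplified illustration of the kind of register-file mitigation this abstract alludes to, the sketch below periodically rotates the logical-to-physical register mapping so that no physical entry holds the same static bit pattern, and hence the same one-sided aging stress, indefinitely. The class, the rotation policy, and its interval are our assumptions, not the paper's mechanism.

NUM_REGS = 8

class RotatingRegisterFile:
    """Toy register file that spreads static-value stress across entries."""

    def __init__(self):
        self.phys = [0] * NUM_REGS      # physical storage
        self.offset = 0                 # current rotation offset

    def _slot(self, logical: int) -> int:
        return (logical + self.offset) % NUM_REGS

    def read(self, logical: int) -> int:
        return self.phys[self._slot(logical)]

    def write(self, logical: int, value: int) -> None:
        self.phys[self._slot(logical)] = value

    def rotate(self) -> None:
        """Shift the mapping by one entry (invoked, say, every N million
        cycles); contents move with the mapping so reads stay consistent."""
        self.phys = self.phys[-1:] + self.phys[:-1]
        self.offset = (self.offset + 1) % NUM_REGS

rf = RotatingRegisterFile()
rf.write(3, 42)
rf.rotate()
assert rf.read(3) == 42  # same logical value, different physical entry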
A modern GPU aims to execute more warps simultaneously for higher thread-level parallelism (TLP) and performance. When generating many memory requests, however, warps contend for the limited cache space and thrash the cache, which in turn severely degrades performance. To reduce such cache thrashing, we may adopt cache locality-aware warp scheduling, which gives higher execution priority to warps with higher potential for data locality. However, we observe that warps with high potential for data locality often incur far more cache thrashing or interference than warps with low potential. Consequently, cache locality-aware warp scheduling may undesirably increase cache interference and/or unnecessarily decrease TLP. In this paper, we propose the Cache Interference-Aware throughput-Oriented (CIAO) on-chip memory architecture and warp scheduling, which exploit unused shared-memory space and take the opposite insight to cache locality-aware warp scheduling. Specifically, the CIAO on-chip memory architecture can adaptively redirect the memory requests of severely interfering warps to unused shared-memory space, isolating them from the requests of interfered warps. If these interfering warps still incur severe cache interference, CIAO warp scheduling then begins to selectively throttle their execution. Our experiment shows that CIAO can offer 54% higher performance than prior cache locality-aware scheduling at a small chip cost.
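
The sketch below condenses the two-stage CIAO decision flow exactly as the abstract describes it: first redirect a severely interfering warp's requests to unused shared memory, then throttle it only if interference persists. The thresholds, the interference metric, and the function itself are our illustrative assumptions, not the paper's values.

REDIRECT_THRESHOLD = 8    # assumed: evictions per interval marking a warp as interfering
THROTTLE_THRESHOLD = 32   # assumed: interference level at which redirection is not enough

def schedule_action(interference: int, already_redirected: bool) -> str:
    """Pick the scheduler's action for one warp this cycle."""
    if interference < REDIRECT_THRESHOLD:
        return "issue"                    # benign warp: run normally
    if not already_redirected:
        return "redirect-to-shared-mem"   # isolate its requests from the cache
    if interference >= THROTTLE_THRESHOLD:
        return "throttle"                 # still thrashing: pause this warp
    return "issue"                        # redirection sufficed

print(schedule_action(40, already_redirected=False))  # redirect-to-shared-mem
print(schedule_action(40, already_redirected=True))   # throttle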
Nastaran Hajinazar, 2021
There is explosive growth in the size of the input and/or intermediate data used and generated by modern and emerging applications. Unfortunately, modern computing systems are not capable of handling large amounts of data efficiently. Major concepts and components (e.g., the virtual memory system) and predominant execution models (e.g., the processor-centric execution model) used in almost all computing systems were designed without modern applications' overwhelming data demand in mind. As a result, accessing, moving, and processing large amounts of data faces important challenges in today's systems, making data a first-class concern and a prime performance and energy bottleneck. This thesis studies the root cause of inefficiency in modern computing systems when handling modern applications' data demand, and aims to fundamentally address such inefficiencies, with a focus on two directions. First, we design SIMDRAM, an end-to-end processing-using-DRAM framework that aids the widespread adoption of processing-using-DRAM, a data-centric computation paradigm that improves the overall performance and efficiency of the system when computing over large amounts of data by minimizing the cost of data movement and enabling computation where the data resides. Second, we introduce the Virtual Block Interface (VBI), a novel virtual memory framework that 1) eliminates the inefficiencies of conventional virtual memory frameworks when handling the high memory demand of modern applications, and 2) is built from the ground up to understand, convey, and exploit data properties, creating opportunities for performance and efficiency improvements.
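
Processing-using-DRAM substrates of the kind SIMDRAM builds on derive bulk bitwise operations from an in-DRAM majority (MAJ) primitive. The sketch below is a functional emulation of that idea, with Python integers standing in for whole DRAM rows of bits; it shows the standard identities that yield AND and OR from MAJ using reserved all-zero and all-one rows.

def maj(a: int, b: int, c: int) -> int:
    """Bitwise majority of three 'rows': 1 wherever at least two inputs are 1."""
    return (a & b) | (a & c) | (b & c)

ROW_MASK = (1 << 8) - 1          # assume 8-bit rows for the demo
ZEROS, ONES = 0, ROW_MASK        # reserved all-0 / all-1 rows

a, b = 0b11001010, 0b10100110
assert maj(a, b, ZEROS) == a & b   # MAJ(A, B, 0) = A AND B
assert maj(a, b, ONES) == a | b    # MAJ(A, B, 1) = A OR B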
Generalized sparse matrix-matrix multiplication (SpGEMM) is a ubiquitous task in various engineering and scientific applications. However, the inner-product-based SpGEMM approach introduces redundant input fetches for mismatched nonzero operands, while the outer-product-based approach suffers from poor output locality due to the numerous partial-product matrices. Inefficient reuse of either input or output data leads to extensive and expensive DRAM access. To address this problem, this paper proposes an efficient sparse matrix multiplication accelerator architecture, SpArch, which jointly optimizes the data locality of both input and output matrices. We first design a highly parallelized streaming-based merger to pipeline the multiply and merge stages of the partial matrices so that partial matrices are merged on chip immediately after being produced. We then propose a condensed matrix representation that reduces the number of partial matrices by three orders of magnitude and thus reduces DRAM access by 5.4x. We further develop a Huffman tree scheduler to improve the scalability of the merger for larger sparse matrices, which reduces DRAM access by another 1.8x. We also resolve the increased input-matrix reads induced by the new representation using a row prefetcher with a near-optimal buffer replacement policy, further reducing DRAM access by 1.5x. Evaluated on 20 benchmarks, SpArch reduces total DRAM access by 2.8x over the previous state of the art. On average, SpArch achieves 4x, 19x, 18x, 17x, and 1285x speedups and 6x, 164x, 435x, 307x, and 62x energy savings over OuterSPACE, MKL, cuSPARSE, CUSP, and ARM Armadillo, respectively.
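
The sketch below illustrates the core data-locality idea in software: outer-product SpGEMM in which each partial product is merged immediately rather than spilled to DRAM. A Python dict stands in for SpArch's on-chip streaming merger, and the input format is our own simplification, not the paper's condensed representation.

from collections import defaultdict

def spgemm_outer(a_cols, b_rows):
    """Outer-product SpGEMM with on-the-fly merging.

    a_cols[k]: nonzeros (i, v) of A's column k; b_rows[k]: nonzeros (j, w)
    of B's row k. Each shared index k contributes one partial matrix.
    """
    c = defaultdict(float)                 # merged partial products
    for k in set(a_cols) & set(b_rows):
        for i, v in a_cols[k]:
            for j, w in b_rows[k]:
                c[(i, j)] += v * w         # merge immediately, not in DRAM
    return dict(c)

# A = [[1, 0], [0, 2]], B = [[0, 3], [4, 0]]  ->  C = [[0, 3], [8, 0]]
a_cols = {0: [(0, 1.0)], 1: [(1, 2.0)]}
b_rows = {0: [(1, 3.0)], 1: [(0, 4.0)]}
print(spgemm_outer(a_cols, b_rows))  # {(0, 1): 3.0, (1, 0): 8.0}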
The graph convolutional network (GCN) emerges as a promising direction for learning inductive representations of graph data, which is commonly used in widespread applications such as e-commerce, social networks, and knowledge graphs. However, learning from graphs is non-trivial because of the mixed computation model involving both graph analytics and neural-network computing. To this end, we decompose GCN learning into two hierarchical paradigms: graph-level and node-level computing. This hierarchical paradigm facilitates software and hardware acceleration for GCN learning. We propose a lightweight graph reordering methodology, incorporated with a GCN accelerator architecture that equips a customized cache design to fully utilize graph-level data reuse. We also propose a mapping methodology aware of data reuse and task-level parallelism to handle various graph inputs effectively. Results show that the Rubik accelerator design improves energy efficiency by 26.3x to 1375.2x over GPU platforms across different datasets and GCN models.
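
The sketch below separates the two computing levels the abstract names: graph-level neighbor gathering and node-level neural transformation. The degree-based relabeling is a generic stand-in for Rubik's lightweight reordering (chosen so heavily reused high-degree nodes get adjacent IDs, which is cache-friendlier), and the unnormalized sum aggregation simplifies a real GCN layer; neither is the paper's exact algorithm.

import numpy as np

def reorder_by_degree(adj: dict) -> dict:
    """Relabel nodes so high-degree (heavily reused) nodes get small, adjacent IDs."""
    order = sorted(adj, key=lambda n: -len(adj[n]))
    remap = {old: new for new, old in enumerate(order)}
    return {remap[n]: [remap[m] for m in nbrs] for n, nbrs in adj.items()}

def gcn_layer(adj: dict, feats: np.ndarray, weight: np.ndarray) -> np.ndarray:
    """Graph level: sum each node's neighbor features (with a self-loop).
    Node level: apply the shared linear transform and a ReLU."""
    agg = np.stack([feats[nbrs].sum(axis=0) + feats[n]
                    for n, nbrs in sorted(adj.items())])
    return np.maximum(agg @ weight, 0.0)

adj = reorder_by_degree({0: [1, 2], 1: [0], 2: [0]})
out = gcn_layer(adj, np.eye(3, dtype=np.float32), np.ones((3, 2), np.float32))
print(out)  # one-layer output for the 3-node toy graph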