
Taming Process Variations in CNFET for Efficient Last Level Cache Design

Posted by: Cheng Liu
Publication date: 2021
Research field: Computer Engineering
Paper language: English





Carbon nanotube field-effect transistors (CNFETs) are emerging as a promising alternative to CMOS transistors thanks to their much higher speed and energy efficiency, which makes the technology particularly suitable for building the energy-hungry last-level cache (LLC). However, process variations (PVs) in CNFETs caused by imperfect fabrication lead to large timing variation, and the worst-case timing dramatically limits the LLC operating speed. In particular, we observe that the latency distribution of a CNFET-based cache is closely tied to the LLC layout. For the two typical LLC layouts, in which the CNT growth direction is aligned with the cache way direction and the cache set direction respectively, we propose the variation-aware set-aligned (VASA) cache and the variation-aware way-aligned (VAWA) cache, combined with corresponding cache optimizations such as data shuffling and page mapping, so that frequently used data can be served at low latency. According to our experiments, the optimized LLC reduces the average access latency by 32% and 45% compared to the baseline designs on the two CNFET layouts respectively, improves overall performance by 6% and 9%, and reduces energy consumption by 4% and 8%. In addition, by considering both the architecture-induced and PV-induced latency variation in a unified model, we extend the VAWA and VASA cache designs to CNFET-based NUCA; the proposed NUCA achieves both significant performance improvement and energy savings compared to a straightforward variation-aware NUCA.
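To make the idea concrete, here is a minimal Python sketch of a variation-aware set-associative cache in the spirit of the data-shuffling idea described above: each way is assumed to have a different PV-induced latency, and frequently accessed blocks are shuffled into the faster ways. All class names, parameters, and latency values are illustrative assumptions, not the paper's implementation.

from collections import defaultdict

class VariationAwareCache:
    """Toy set-associative cache whose ways have different PV-induced latencies."""
    def __init__(self, num_sets=8, way_latencies=(2, 3, 4, 6)):
        self.way_latencies = way_latencies          # cycles per way after PV binning
        self.sets = [[None] * len(way_latencies) for _ in range(num_sets)]
        self.access_count = defaultdict(int)        # per-block hotness counter
        self.total_latency = 0

    def access(self, block):
        s = block % len(self.sets)
        tags = self.sets[s]
        self.access_count[block] += 1
        if block in tags:                           # hit: pay the latency of that way
            self.total_latency += self.way_latencies[tags.index(block)]
            self._shuffle_hot(s)                    # keep hot blocks in fast ways
        else:                                       # miss: use an empty way if any,
            victim = tags.index(None) if None in tags else len(tags) - 1
            tags[victim] = block                    # else evict the last (coldest) way
            self.total_latency += max(self.way_latencies)   # simple miss cost

    def _shuffle_hot(self, s):
        # "Data shuffling": place the hottest resident blocks into the fastest ways.
        resident = sorted((t for t in self.sets[s] if t is not None),
                          key=lambda t: -self.access_count[t])
        fast_first = sorted(range(len(self.way_latencies)),
                            key=lambda w: self.way_latencies[w])
        new_tags = [None] * len(self.way_latencies)
        for blk, w in zip(resident, fast_first):
            new_tags[w] = blk
        self.sets[s] = new_tags

cache = VariationAwareCache()
for b in [0, 8, 0, 16, 0, 8, 0]:       # block 0 is hot and migrates to the fastest way
    cache.access(b)
print(cache.total_latency)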




Read also

Cache coherence protocols such as MESI that use writer-initiated invalidation have high complexity and sometimes poor performance and energy usage, especially under false sharing. Such protocols require numerous transient states, a shared directory, and support for core-to-core communication. An alternative to MESI's writer-initiated invalidation is self-invalidation, which achieves lower complexity than MESI but adds high performance costs or relies on programmer annotations or specific data access patterns. This paper presents Neat, a low-complexity, efficient cache coherence protocol. Neat uses self-invalidation, thus avoiding MESI's transient states, directory, and core-to-core communication requirements. Neat uses novel mechanisms that effectively avoid many unnecessary self-invalidations. An evaluation shows that Neat is simple and has lower verification complexity than the MESI protocol. Neat not only outperforms state-of-the-art self-invalidation protocols, but its performance and energy consumption are comparable to MESI's, and it outperforms MESI under false sharing.
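As a rough illustration of the general self-invalidation idea (not Neat's specific mechanisms, which additionally avoid many unnecessary self-invalidations), the following Python sketch models cores that write through to shared memory at a release and self-invalidate clean private-cache lines at an acquire, so no directory or writer-initiated invalidation messages are needed; all names are hypothetical.

class Core:
    def __init__(self, shared_mem):
        self.mem = shared_mem        # shared LLC / memory (dict: addr -> value)
        self.cache = {}              # private cache: addr -> (value, dirty)

    def read(self, addr):
        if addr not in self.cache:
            self.cache[addr] = (self.mem.get(addr, 0), False)
        return self.cache[addr][0]

    def write(self, addr, value):
        self.cache[addr] = (value, True)

    def release(self):               # flush dirty lines at a release boundary
        for addr, (val, dirty) in self.cache.items():
            if dirty:
                self.mem[addr] = val
        self.cache = {a: (v, False) for a, (v, _) in self.cache.items()}

    def acquire(self):               # self-invalidate clean lines at an acquire
        self.cache = {a: (v, d) for a, (v, d) in self.cache.items() if d}

mem = {}
p0, p1 = Core(mem), Core(mem)
p0.write(0x10, 42); p0.release()     # producer publishes data
p1.acquire()                         # consumer self-invalidates stale copies
print(p1.read(0x10))                 # -> 42, with no directory or invalidation messages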
Nanopore genome sequencing is key to enabling personalized medicine, global food security, and virus surveillance. State-of-the-art base-callers adopt deep neural networks (DNNs) to translate the electrical signals generated by nanopore sequencers into digital DNA symbols. A DNN-based base-caller consumes 44.5% of the total execution time of a nanopore sequencing pipeline. However, it is difficult to quantize a base-caller and build a power-efficient processing-in-memory (PIM) architecture to run the quantized base-caller. In this paper, we propose Helix, a novel algorithm/architecture co-designed PIM, to accelerate nanopore base-calling power-efficiently and accurately. From the algorithm perspective, we present systematic-error-aware training to minimize the number of systematic errors in a quantized base-caller. From the architecture perspective, we propose a low-power SOT-MRAM-based ADC array to process analog-to-digital conversion operations and improve the power efficiency of prior DNN PIMs. Moreover, we revise a traditional NVM-based dot-product engine to accelerate CTC decoding operations and create a SOT-MRAM binary comparator array to process read voting. Compared to state-of-the-art PIMs, Helix improves base-calling throughput by 6x, throughput per Watt by 11.9x, and throughput per mm^2 by 7.5x without degrading base-calling accuracy.
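The toy Python sketch below only illustrates the kind of weight quantization a quantized base-caller depends on: symmetric int8 quantization with a crude check of the output error it introduces. Helix's systematic-error-aware training and the SOT-MRAM PIM hardware are not modeled, and all shapes and values are made up.

import numpy as np

def quantize_int8(w):
    scale = np.abs(w).max() / 127.0          # per-tensor symmetric scale
    q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, size=(64, 64)).astype(np.float32)   # a toy weight matrix
x = rng.normal(0, 1.0, size=64).astype(np.float32)         # a toy input window

q, scale = quantize_int8(w)
y_fp = w @ x                                  # full-precision output
y_q = (q.astype(np.float32) * scale) @ x      # dequantized int8 output
print("mean abs error:", float(np.mean(np.abs(y_fp - y_q))))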
While multi-GPU (MGPU) systems are extremely popular for compute-intensive workloads, several inefficiencies in the memory hierarchy and data movement waste GPU resources and complicate the programming of MGPU systems. First, due to the lack of hardware-level coherence, the MGPU programming model requires the programmer to replicate and repeatedly transfer data between the GPUs' memories, leading to inefficient use of precious GPU memory. Second, maintaining coherence across an MGPU system by transferring data over low-bandwidth, high-latency off-chip links degrades system performance. Third, since the programmer must manually maintain data coherence, programming an MGPU system to maximize its throughput is extremely challenging. To address these issues, we propose HALCONE, a novel lightweight timestamp-based coherence protocol for MGPU systems, and modify the memory hierarchy of the GPUs to support physically shared memory. HALCONE replaces compute unit (CU) level logical time counters with cache-level logical time counters to reduce coherence traffic. Furthermore, HALCONE introduces a novel timestamp storage unit (TSU) in main memory, with no additional performance overhead, to perform coherence actions. Our proposed HALCONE protocol maintains data coherence in the MGPU memory hierarchy with minimal performance overhead (less than 1%). Using a set of standard MGPU benchmarks, we observe that a 4-GPU MGPU system with shared memory and HALCONE performs, on average, 4.6x and 3x better than a 4-GPU MGPU system with existing RDMA and with the recently proposed HMG coherence protocol, respectively. We demonstrate the scalability of HALCONE using different GPU counts (2, 4, 8, and 16) and different CU counts (32, 48, and 64 CUs per GPU) for 11 standard benchmarks.
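As a minimal sketch of timestamp-based coherence in general (an assumed structure, not HALCONE's actual design), the Python snippet below keeps a per-address last-write logical timestamp at the shared level, standing in for the TSU, and treats a cached copy as stale whenever the address was written after the copy was fetched; all names are illustrative.

class SharedMemory:
    def __init__(self):
        self.data = {}
        self.write_ts = {}           # per-address last-write timestamp (TSU stand-in)
        self.clock = 0               # global logical clock

    def write(self, addr, value):
        self.clock += 1
        self.data[addr] = value
        self.write_ts[addr] = self.clock

    def read(self, addr):
        return self.data.get(addr, 0), self.write_ts.get(addr, 0)

class GPUCache:
    def __init__(self, shared):
        self.shared = shared
        self.lines = {}              # addr -> (value, fetch timestamp)

    def load(self, addr):
        line = self.lines.get(addr)
        if line and line[1] >= self.shared.write_ts.get(addr, 0):
            return line[0]                              # copy still coherent: no traffic
        value, _ = self.shared.read(addr)
        self.lines[addr] = (value, self.shared.clock)   # record fetch timestamp
        return value

    def store(self, addr, value):
        self.shared.write(addr, value)                  # write through to shared memory
        self.lines[addr] = (value, self.shared.clock)

mem = SharedMemory()
gpu0, gpu1 = GPUCache(mem), GPUCache(mem)
gpu1.load(0xA)                      # gpu1 caches the old value (0)
gpu0.store(0xA, 7)                  # gpu0 writes; the logical clock advances
print(gpu1.load(0xA))               # gpu1 detects the stale copy and re-fetches -> 7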
Dawen Xu, Cheng Chu, Cheng Liu (2021)
Deformable convolution networks (DCNs), proposed to address image recognition with geometric or photometric variations, typically involve deformable convolution that samples arbitrary locations of the input features. These locations change with different inputs and induce considerable dynamic and irregular memory accesses that cannot be handled by classic neural network accelerators (NNAs). Moreover, the bilinear interpolation (BLI) operation required to obtain deformed features in DCNs also cannot be deployed on existing NNAs directly. Although a general-purpose processor (GPP) sitting alongside a classic NNA can process the deformable convolution, processing on the GPP can be extremely slow due to the lack of parallel computing capability. To address the problem, we develop a DCN accelerator on top of existing NNAs to support both standard convolution and deformable convolution. Specifically, for the dynamic and irregular accesses in DCNs, we divide both the input and output features into tiles and build a tile dependency table (TDT) to track the irregular tile dependencies at runtime. With the TDT, we further develop an on-chip tile scheduler to handle the dynamic and irregular accesses efficiently. In addition, we propose a novel mapping strategy to enable parallel BLI processing on NNAs and apply layer-fusion techniques for more energy-efficient DCN processing. According to our experiments, the proposed accelerator achieves orders of magnitude higher performance and energy efficiency than typical computing architectures including ARM, ARM+TPU, and GPU, with a 6.6% chip-area penalty over a classic NNA.
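The tile-dependency bookkeeping can be illustrated in software: given the per-pixel sampling offsets produced by a deformable convolution layer, one can record which input tiles each output tile needs. The Python sketch below shows only this bookkeeping; the tile size, shapes, and function names are assumptions, and the on-chip scheduler and BLI mapping are not modeled.

import numpy as np

def build_tile_dependency_table(offsets, in_shape, tile=8):
    """offsets: (H, W, 2) array of (dy, dx) sampling offsets per output pixel."""
    H, W, _ = offsets.shape
    tdt = {}                                       # (out_ty, out_tx) -> set of input tiles
    for y in range(H):
        for x in range(W):
            dy, dx = offsets[y, x]
            sy = int(np.clip(y + dy, 0, in_shape[0] - 1))   # sampled input row
            sx = int(np.clip(x + dx, 0, in_shape[1] - 1))   # sampled input column
            out_tile = (y // tile, x // tile)
            tdt.setdefault(out_tile, set()).add((sy // tile, sx // tile))
    return tdt

rng = np.random.default_rng(1)
offs = rng.uniform(-4, 4, size=(16, 16, 2))       # toy learned offsets
tdt = build_tile_dependency_table(offs, in_shape=(16, 16))
print(tdt[(0, 0)])                                # input tiles needed by output tile (0, 0)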
In recent years, machine intelligence (MI) applications have emerged as a major driver for the computing industry. Optimizing these workloads is important but complicated. As memory demands grow and data-movement overheads increasingly limit performance, determining the best GPU caching policy for a diverse range of MI workloads represents one important challenge. To study this, we evaluate 17 MI applications and characterize their behavior under a range of GPU caching strategies. In our evaluations, we find that the choice of GPU caching policy involves multiple performance trade-offs and interactions, and there is no one-size-fits-all GPU caching policy for MI workloads. Based on detailed simulation results, we motivate and evaluate a set of cache optimizations that consistently match the performance of the best static GPU caching policies.
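To see why no single static policy wins, the following toy Python experiment (not taken from the paper) compares the hit rate of plain LRU against an LRU variant that bypasses lines with no observed reuse, on two synthetic access patterns; the capacities, traces, and bypass heuristic are all made-up assumptions.

from collections import OrderedDict

def hit_rate(trace, capacity=16, bypass_no_reuse=False):
    cache, seen, hits = OrderedDict(), {}, 0
    for addr in trace:
        seen[addr] = seen.get(addr, 0) + 1
        if addr in cache:
            hits += 1
            cache.move_to_end(addr)
            continue
        if bypass_no_reuse and seen[addr] < 2:
            continue                            # don't cache lines never reused so far
        cache[addr] = True
        if len(cache) > capacity:
            cache.popitem(last=False)           # evict the least-recently-used line
    return hits / len(trace)

# Pattern A: each line is reused once, back to back (favors plain LRU).
pairs = [a for a in range(200) for _ in range(2)]
# Pattern B: a small hot set interleaved with streaming data (favors bypassing).
mixed, s = [], 1000
for _ in range(50):
    mixed += list(range(12)) + list(range(s, s + 8)); s += 8

for name, trace in [("pairs", pairs), ("mixed", mixed)]:
    print(name, "LRU:", round(hit_rate(trace), 2),
          "LRU+bypass:", round(hit_rate(trace, bypass_no_reuse=True), 2))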