Reversible logic is experiencing renewed interest as we approach the limits of CMOS technologies. While physical implementations of reversible gates have yet to materialize, it is safe to assume that they will rely on faulty individual components. In this work we present a method to provide fault tolerance to a reversible circuit based on invariant relationships.
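As an illustration only (the paper's specific invariants are not reproduced here), the Python sketch below shows the general idea of invariant-based fault detection: a small reversible circuit built from CNOT and Toffoli gates is run forward and then inverted, and a violated round-trip invariant flags an injected fault. The circuit, fault model, and invariant are all assumptions for the sake of the example.

```python
# Illustrative sketch (not the authors' construction): detect a faulty gate in a
# reversible circuit by checking an invariant that must hold for fault-free runs.
# The circuit is a CNOT followed by a Toffoli; the invariant is that running the
# circuit and then its inverse restores the original bits.

def cnot(bits, c, t):
    bits = list(bits)
    bits[t] ^= bits[c]
    return tuple(bits)

def toffoli(bits, c1, c2, t):
    bits = list(bits)
    bits[t] ^= bits[c1] & bits[c2]
    return tuple(bits)

def circuit(bits, faulty=False):
    bits = cnot(bits, 0, 1)
    if faulty:  # injected stuck-at-1 fault on bit 2, purely for illustration
        bits = tuple(b if i != 2 else 1 for i, b in enumerate(bits))
    return toffoli(bits, 0, 1, 2)

def inverse(bits):
    # Each gate is its own inverse, so the inverse circuit reverses the gate order.
    bits = toffoli(bits, 0, 1, 2)
    return cnot(bits, 0, 1)

for x in [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]:
    ok = inverse(circuit(x)) == x                 # invariant holds when fault-free
    bad = inverse(circuit(x, faulty=True)) == x   # an injected fault may violate it
    print(x, "fault-free invariant:", ok, "| faulty run flagged:", not bad)
```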
When a computational task tolerates a relaxation of its specification, or when an algorithm tolerates the effects of noise in its execution, hardware, programming languages, and system software can trade deviations from correct behavior for lower resource usage. We present, for the first time, a synthesis of research results on computing systems that only make as many errors as their users can tolerate, drawn from the disciplines of computer-aided design of circuits, digital system design, computer architecture, programming languages, operating systems, and information theory. Rather than over-provisioning resources at each layer to avoid errors, it can be more efficient to exploit the masking of errors at one layer, which prevents them from propagating to higher layers. We survey tradeoffs for individual layers of computing systems, from the circuit level to the operating system level, and illustrate the potential benefits of end-to-end approaches with two examples. To tie together the survey, we present a consistent formalization of terminology across the layers that does not significantly deviate from the terminology traditionally used by research communities in their layer of focus.
In recent years, reversible logic has been considered an important approach to designing low-power digital circuits. It has numerous applications in emerging nanotechnologies such as DNA computing, quantum computing, low-power VLSI, and quantum-dot cellular automata. In this paper we propose optimized designs of reversible sequential circuits in terms of gate count, delay, and hardware complexity. We design the latches with a new reversible gate and reduce the required number of gates, garbage outputs, delay, and hardware complexity. Since the number of gates and garbage outputs increases the complexity of reversible circuits, this design significantly enhances performance. We propose a reversible D latch and JK latch that improve on the existing designs available in the literature.
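To make the defining property concrete, the sketch below checks that a candidate gate's truth table is a bijection, which is what makes a gate reversible (no input information is lost). The well-known Fredkin gate is used as a stand-in here, since the paper's new gate is not reproduced; the check itself is generic.

```python
from itertools import product

def fredkin(a, b, c):
    # Fredkin (controlled-swap) gate: swaps b and c when a == 1.
    return (a, c, b) if a else (a, b, c)

def is_reversible(gate, width):
    """A gate is reversible iff its truth table is a bijection on {0,1}^width."""
    outputs = {gate(*bits) for bits in product((0, 1), repeat=width)}
    return len(outputs) == 2 ** width

print("Fredkin gate reversible:", is_reversible(fredkin, 3))
```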
As DRAM technology continues to evolve towards smaller feature sizes and higher densities, faults in the DRAM subsystem are becoming more severe. Current servers mostly use Chipkill-based schemes to tolerate up to one or two symbol errors per DRAM beat. Multi-symbol errors arising from faults in multiple data buses and chips may not be detected by these schemes. In this paper, we introduce Single Symbol Correction Multiple Symbol Detection (SSCMSD), a novel error-handling scheme that corrects single-symbol errors and detects multi-symbol errors. Our scheme uses a hash in combination with an error-correcting code (ECC) to avoid silent data corruptions (SDCs). SSCMSD can also enhance the capability to detect errors in address bits. We employ a 32-bit CRC along with a Reed-Solomon code to implement SSCMSD for an x4-based DDRx system. Our simulations show that the proposed scheme effectively prevents SDCs in the presence of multiple symbol errors. Our design achieves this without introducing additional read latency. It requires 19 chips per rank (a storage overhead of 18.75 percent), 76 data bus lines, and additional hash logic at the memory controller.
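The Python sketch below illustrates the hash-plus-ECC idea behind SSCMSD on the read path: a CRC-32 tag computed over the original data is verified after ECC decoding, so a multi-symbol error that the ECC misses or miscorrects is detected rather than silently returned. The rs_encode/rs_decode placeholders are assumptions standing in for the paper's Reed-Solomon code; the actual chip- and bus-level symbol layout is not modeled.

```python
import zlib

def rs_encode(data: bytes) -> bytes:
    # Placeholder: a real implementation would append Reed-Solomon parity symbols.
    return data

def rs_decode(codeword: bytes) -> bytes:
    # Placeholder: a real implementation would correct up to one symbol error.
    return codeword

def write_block(data: bytes) -> bytes:
    # Store a CRC-32 tag of the original data inside the ECC codeword, so the
    # hash can catch miscorrections made by the ECC itself.
    tag = zlib.crc32(data).to_bytes(4, "little")
    return rs_encode(data + tag)

def read_block(codeword: bytes) -> bytes:
    decoded = rs_decode(codeword)          # ECC decode (single-symbol correction)
    data, tag = decoded[:-4], decoded[-4:]
    if zlib.crc32(data).to_bytes(4, "little") != tag:
        # Hash mismatch: the ECC failed or silently miscorrected, so raise
        # a detected error instead of returning corrupted data (no SDC).
        raise IOError("multi-symbol error detected")
    return data

block = write_block(b"example cache line")
assert read_block(block) == b"example cache line"
corrupted = bytes([block[0] ^ 0xFF]) + block[1:]
try:
    read_block(corrupted)
except IOError as e:
    print("detected:", e)
```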
We propose the first Reversible Coherence Protocol (RCP), a new protocol designed from the ground up to enable invisible speculative loads. RCP takes a bold approach by including speculative loads and merge/purge operations in the interface between the processor and cache coherence, allowing them to participate in the coherence protocol. This means that speculative loads, ordinary loads/stores, and merge/purge operations can all affect the state of a given cache line. RCP is the first coherence protocol that enables the commit and squash of speculative loads among distributed cache components in a general memory hierarchy. RCP incurs an average slowdown of (3.0%, 8.3%, 7.4%) on (SPEC2006, SPEC2017, PARSEC), which is lower than (26.5%, 12%, 18.3%) for InvisiSpec and (3.2%, 9.4%, 24.2%) for CleanupSpec. The coherence traffic overhead is 46% on average, compared to 40% and 27% for InvisiSpec and CleanupSpec, respectively. Even with this higher traffic overhead (~46%), the performance overhead of RCP is lower than that of InvisiSpec and comparable to that of CleanupSpec. This reveals a key advantage of RCP: the coherence actions triggered by the merge and purge operations are not on the critical path of execution and can be performed in the cache hierarchy concurrently with processor execution.
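As a rough illustration of the merge/purge idea (a toy model, not the published RCP state machine), the sketch below gives a cache line an extra speculative flag so a speculative fill can later be committed (merge) or discarded (purge) without leaving architecturally visible state. The simplified MSI-style states and transitions are assumptions made for the example.

```python
class CacheLine:
    def __init__(self):
        self.state = "I"          # simplified MSI-style stable states: M, S, I
        self.spec = False         # line was filled by a speculative load

    def speculative_load(self):
        if self.state == "I":
            self.state, self.spec = "S", True   # fill, but remember it is speculative

    def merge(self):
        # Speculation resolved correctly: promote the fill to an ordinary Shared copy.
        self.spec = False

    def purge(self):
        # Speculation squashed: discard the speculative fill so no trace remains.
        if self.spec:
            self.state, self.spec = "I", False

line = CacheLine()
line.speculative_load()
line.purge()
print(line.state)   # back to "I", as if the speculative load never happened
```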
We study the impact of experimental imperfections in integrated photonic circuits. We discuss the emergence of a moderately biased error in path encoding and investigate its correlation with properties of the optical paths. Our analysis connects and deepens previous studies in this direction, revealing potential issues for high-precision tests and optical implementations of machine learning.