
QUAC-TRNG: High-Throughput True Random Number Generation Using Quadruple Row Activation in Commodity DRAM Chips

Added by Ataberk Olgun
Publication date: 2021
Language: English





True random number generators (TRNG) sample random physical processes to create large amounts of random numbers for various use cases, including security-critical cryptographic primitives, scientific simulations, machine learning applications, and even recreational entertainment. Unfortunately, not every computing system is equipped with dedicated TRNG hardware, limiting the application space and security guarantees for such systems. To open the application space and enable security guarantees for the overwhelming majority of computing systems that do not necessarily have dedicated TRNG hardware, we develop QUAC-TRNG. QUAC-TRNG exploits the new observation that a carefully-engineered sequence of DRAM commands activates four consecutive DRAM rows in rapid succession. This QUadruple ACtivation (QUAC) causes the bitline sense amplifiers to non-deterministically converge to random values when we activate four rows that store conflicting data because the net deviation in bitline voltage fails to meet reliable sensing margins. We experimentally demonstrate that QUAC reliably generates random values across 136 commodity DDR4 DRAM chips from one major DRAM manufacturer. We describe how to develop an effective TRNG (QUAC-TRNG) based on QUAC. We evaluate the quality of our TRNG using NIST STS and find that QUAC-TRNG successfully passes each test. Our experimental evaluations show that QUAC-TRNG generates true random numbers with a throughput of 3.44 Gb/s (per DRAM channel), outperforming the state-of-the-art DRAM-based TRNG by 15.08x and 1.41x for basic and throughput-optimized configurations, respectively.
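To illustrate why conflicting data across four simultaneously activated rows leads to non-deterministic sensing, here is a minimal conceptual sketch in Python under an assumed, simplified charge-sharing model; the capacitance ratios, noise magnitude, and sensing threshold are illustrative choices, not values from the paper.

```python
# Conceptual simulation of QUAC (QUadruple ACtivation) sensing, assuming a
# simplified charge-sharing model; cell/bitline capacitances, noise magnitude,
# and data patterns are illustrative, not taken from the paper.
import random

VDD = 1.2           # assumed supply voltage (V)
C_CELL = 1.0        # relative cell capacitance
C_BL = 4.0          # relative bitline capacitance
NOISE_SIGMA = 0.01  # assumed process/thermal noise on the bitline (V)

def quac_sense(cell_bits):
    """Return the value a sense amplifier resolves to after four cells
    (storing cell_bits) share charge with a half-VDD precharged bitline."""
    # Charge sharing: each activated cell contributes VDD (bit=1) or 0 V (bit=0).
    total_c = C_BL + len(cell_bits) * C_CELL
    v_bl = (C_BL * VDD / 2 + sum(b * VDD * C_CELL for b in cell_bits)) / total_c
    # Add small random noise; if the deviation from VDD/2 is tiny (conflicting
    # data, e.g., two 1s and two 0s), the outcome becomes non-deterministic.
    v_bl += random.gauss(0, NOISE_SIGMA)
    return 1 if v_bl > VDD / 2 else 0

# Conflicting data pattern (two cells store 1, two store 0) -> random outcome.
samples = [quac_sense([1, 1, 0, 0]) for _ in range(10000)]
print("fraction of 1s with conflicting data:", sum(samples) / len(samples))
# Non-conflicting pattern (all 1s) -> deterministic outcome.
print("all-ones pattern senses to:", quac_sense([1, 1, 1, 1]))
```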



Related research

Recent advances in predictive data analytics, together with ever-growing digitalization and connectivity and the explosive expansion of industrial and consumer Internet-of-Things (IoT) devices, have raised significant concerns about the security of people's identities and data. These trends create a near-ideal environment for adversaries, both in the amount of data available for modeling and in the greater accessibility of security primitives and random number generators to side-channel analysis. Random number generators (RNGs) are at the core of most security applications, so a secure and trustworthy source of randomness is needed. Here, we present a differential circuit for harvesting one of the most stochastic phenomena in solid-state physics, random telegraph noise (RTN), designed to exhibit significantly lower sensitivity to other noise sources, radiation, and temperature fluctuations. We use RTN in amorphous SrTiO3-based resistive memories to evaluate the proposed true random number generator (TRNG). The TRNG passes conventional true-randomness tests (the NIST tests), and its robustness against predictive machine learning and side-channel attacks is demonstrated in comparison with non-differential readout methods.
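A rough sketch of why a differential readout suppresses common-mode disturbances (e.g., temperature drift) while preserving device-local RTN, using an invented two-level RTN model; the switching probability, drift term, and thresholding below are illustrative, not the paper's circuit.

```python
# Toy model: two matched RTN devices read differentially; slow common-mode
# drift cancels in the subtraction, independent RTN survives as random bits.
import random

def rtn_level(state, p_switch=0.3):
    """Two-level random telegraph noise: the trap randomly toggles its state."""
    return state ^ (random.random() < p_switch)

def differential_bits(n, common_mode_drift=0.2):
    s1, s2, bits = 0, 0, []
    for t in range(n):
        s1, s2 = rtn_level(s1), rtn_level(s2)
        drift = common_mode_drift * (t % 100) / 100       # affects both devices
        v1 = 0.5 * s1 + drift + random.gauss(0, 0.01)
        v2 = 0.5 * s2 + drift + random.gauss(0, 0.01)
        bits.append(1 if (v1 - v2) > 0 else 0)             # drift cancels here
    return bits

bits = differential_bits(10000)
print("bias of raw differential bits:", sum(bits) / len(bits))
```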
This paper summarizes the idea of ChargeCache, which was published in HPCA 2016 [51], and examines the work's significance and future potential. DRAM latency continues to be a critical bottleneck for system performance. In this work, we develop a low-cost mechanism, called ChargeCache, that enables faster access to recently-accessed rows in DRAM, with no modifications to DRAM chips. Our mechanism is based on the key observation that a recently-accessed row has more charge and thus the following access to the same row can be performed faster. To exploit this observation, we propose to track the addresses of recently-accessed rows in a table in the memory controller. If a later DRAM request hits in that table, the memory controller uses lower timing parameters, leading to reduced DRAM latency. Row addresses are removed from the table after a specified duration to ensure rows that have leaked too much charge are not accessed with lower latency. We evaluate ChargeCache on a wide variety of workloads and show that it provides significant performance and energy benefits for both single-core and multi-core systems.
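A minimal sketch of ChargeCache's controller-side bookkeeping, assuming a simple dictionary-backed table; the timing values and caching duration are placeholders, not the parameters evaluated in the paper.

```python
# Track recently-accessed (bank, row) pairs; hits use lowered timing
# parameters, and entries expire so over-leaked rows are not accessed fast.
DEFAULT_TRCD_NS = 13.5            # assumed nominal activation latency
LOWERED_TRCD_NS = 9.0             # assumed reduced latency for charged rows
CACHING_DURATION_NS = 1_000_000   # assumed expiry interval for table entries

class ChargeCache:
    def __init__(self):
        self.table = {}  # (bank, row) -> timestamp of last access

    def access(self, bank, row, now_ns):
        """Return the activation latency to use for this DRAM request."""
        # Drop entries whose rows may have leaked too much charge.
        self.table = {k: t for k, t in self.table.items()
                      if now_ns - t < CACHING_DURATION_NS}
        hit = (bank, row) in self.table
        self.table[(bank, row)] = now_ns  # record this access
        return LOWERED_TRCD_NS if hit else DEFAULT_TRCD_NS

cc = ChargeCache()
print(cc.access(0, 42, now_ns=0))          # first access: nominal latency
print(cc.access(0, 42, now_ns=500))        # recently accessed: lowered latency
print(cc.access(0, 42, now_ns=2_000_000))  # expired entry: nominal latency
```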
The integration of quantum communication functions often requires dedicated opto-electronic components that do not align well with the technology roadmaps of telecom systems. We investigate the capability of commercial coherent transceiver sub-systems to support quantum random number generation next to classical data transmission, and demonstrate how the quantum entropy source based on vacuum fluctuations can be potentially converted into a true random number generator for this purpose. We discuss two possible implementations, building on a receiver-centric and a transmitter-centric architecture. In the first scheme, balanced homodyne broadband detection in a coherent intradyne receiver is exploited to measure the vacuum state at the input of a 90-degree hybrid. In our proof-of-principle demonstration, a clearance of >2 dB between optical and electrical noise is obtained over a wide bandwidth of more than 11 GHz. In the second scheme, we propose and evaluate the re-use of monitoring photodiodes of a polarization-multiplexed inphase/quadrature modulator for the same purpose. Time-interleaved random number generation is demonstrated for 10 Gbaud polarization-multiplexed quadrature phase shift keyed data transmission. The availability of detailed models allows the extractable entropy to be calculated, and we accordingly demonstrate randomness extraction for our two proof-of-principle experiments, employing a two-universal strong extractor.
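The randomness-extraction step can be illustrated with a Toeplitz-matrix hash, one common two-universal family; the paper's exact extractor construction and input/output lengths are not given here, so the sizes in this sketch are placeholders.

```python
# Toeplitz hashing over GF(2): compress noisy raw bits into nearly uniform
# output bits using a seed that defines the Toeplitz matrix diagonals.
import random

def toeplitz_extract(raw_bits, out_len, seed_bits):
    """Multiply (mod 2) an out_len x len(raw_bits) Toeplitz matrix, defined
    by seed_bits, with the raw bit vector."""
    n = len(raw_bits)
    assert len(seed_bits) == out_len + n - 1
    out = []
    for i in range(out_len):
        # Row i, column j of the Toeplitz matrix is seed_bits[i - j + n - 1].
        acc = 0
        for j in range(n):
            acc ^= seed_bits[i - j + n - 1] & raw_bits[j]
        out.append(acc)
    return out

# Example: compress 256 digitized noise bits into 64 extracted bits.
raw = [random.getrandbits(1) for _ in range(256)]   # stand-in for vacuum-noise samples
seed = [random.getrandbits(1) for _ in range(64 + 256 - 1)]
print(toeplitz_extract(raw, 64, seed)[:16])
```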
DRAM is the dominant main memory technology used in modern computing systems. Computing systems implement a memory controller that interfaces with DRAM via DRAM commands. DRAM executes the given commands using internal components (e.g., access transistors, sense amplifiers) that are orchestrated by DRAM internal timings, which are fixed for each DRAM command. Unfortunately, the use of fixed internal timings limits the types of operations that DRAM can perform and hinders the implementation of new functionalities and custom mechanisms that improve DRAM reliability, performance, and energy efficiency. To overcome these limitations, we propose enabling programmable DRAM internal timings for controlling in-DRAM components. To this end, we design CODIC, a new low-cost DRAM substrate that enables fine-grained control over four previously fixed internal DRAM timings that are key to many DRAM operations. We implement CODIC with only minimal changes to the DRAM chip and the DDRx interface. To demonstrate the potential of CODIC, we propose two new CODIC-based security mechanisms that outperform state-of-the-art mechanisms in several ways: (1) a new DRAM Physical Unclonable Function (PUF) that is more robust and has significantly higher throughput than state-of-the-art DRAM PUFs, and (2) the first cold boot attack prevention mechanism that does not introduce any performance or energy overheads at runtime.
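A toy model of the general PUF principle such a substrate can build on: fixed, device-specific analog variation produces a repeatable failure pattern when internal timings are made aggressive. This is a conceptual illustration only, not CODIC's circuit-level mechanism; the strength distribution, timing scale, and threshold are invented.

```python
# Toy PUF model: per-cell "strength" is fixed per chip (process variation);
# shortening an internal timing makes weak cells fail, and the failure bitmap
# serves as a device-unique, repeatable signature.
import random

random.seed(1234)  # stands in for one physical chip's fixed process variation
CELL_STRENGTH = [random.gauss(1.0, 0.1) for _ in range(256)]

def puf_response(timing_scale):
    """Cells whose fixed strength cannot keep up with a shortened internal
    timing fail; the failure bitmap is the PUF response."""
    return [1 if strength * timing_scale < 0.85 else 0
            for strength in CELL_STRENGTH]

r1 = puf_response(timing_scale=0.9)  # aggressive timing: some cells fail
r2 = puf_response(timing_scale=0.9)  # same chip, same timing: same response
print("stable across evaluations:", r1 == r2)
print("response bits set:", sum(r1), "of", len(r1))
```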
This paper summarizes our work on experimental characterization and analysis of reduced-voltage operation in modern DRAM chips, which was published in SIGMETRICS 2017, and examines the work's significance and future potential. We take a comprehensive approach to understanding and exploiting the latency and reliability characteristics of modern DRAM when the DRAM supply voltage is lowered below the nominal voltage level specified by DRAM standards. We perform an experimental study of 124 real DDR3L (low-voltage) DRAM chips manufactured recently by three major DRAM vendors. We find that reducing the supply voltage below a certain point introduces bit errors in the data, and we comprehensively characterize the behavior of these errors. We discover that these errors can be avoided by increasing the latency of three major DRAM operations (activation, restoration, and precharge). We perform detailed DRAM circuit simulations to validate and explain our experimental findings. We also characterize the various relationships between reduced supply voltage and error locations, stored data patterns, DRAM temperature, and data retention. Based on our observations, we propose a new DRAM energy reduction mechanism, called Voltron. The key idea of Voltron is to use a performance model to determine by how much we can reduce the supply voltage without introducing errors and without exceeding a user-specified threshold for performance loss. Our evaluations show that Voltron reduces the average DRAM and system energy consumption by 10.5% and 7.3%, respectively, while limiting the average system performance loss to only 1.8%, for a variety of memory-intensive quad-core workloads. We also show that Voltron significantly outperforms prior dynamic voltage and frequency scaling mechanisms for DRAM.
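A minimal sketch of Voltron's voltage-selection idea, using a hypothetical performance model; the voltage step, error-free limit, and linear slowdown model below are placeholders, not the paper's measured data.

```python
# Pick the lowest DRAM supply voltage that (1) stays above the error-free
# limit and (2) keeps predicted performance loss within a user threshold.
NOMINAL_V = 1.35   # DDR3L nominal supply voltage (V)
MIN_SAFE_V = 1.15  # assumed lowest error-free voltage with stretched latencies

def predicted_slowdown(voltage):
    """Hypothetical performance model: lower voltage -> longer DRAM latency
    -> workload-dependent slowdown (fraction of baseline performance lost)."""
    return 0.0 if voltage >= NOMINAL_V else 0.12 * (NOMINAL_V - voltage) / 0.20

def select_voltage(max_perf_loss, step=0.025):
    """Lower the voltage step by step while the error-free limit and the
    performance-loss threshold are both respected."""
    v = NOMINAL_V
    while v - step >= MIN_SAFE_V and predicted_slowdown(v - step) <= max_perf_loss:
        v -= step
    return v

print(select_voltage(max_perf_loss=0.018))  # e.g., limit slowdown to ~1.8%
```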
