Benchmarking is how the performance of a computing system is determined. Surprisingly, even for classical computers this is not a straightforward process. One must choose the appropriate benchmark and metrics to extract meaningful results. Different benchmarks test the system in different ways, and each individual metric may or may not be of interest. Choosing the appropriate approach is tricky. The situation is even more open-ended for quantum computers, where there is a wider range of hardware, fewer established guidelines, and additional complicating factors. Notably, quantum noise significantly impacts performance and is difficult to model accurately. Here, we discuss benchmarking of quantum computers from a computer architecture perspective and provide numerical simulations highlighting challenges that suggest caution is warranted.
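To illustrate why noise makes benchmark results hard to interpret, below is a minimal sketch (not the paper's simulation code; the depolarizing channel and all parameter values are assumptions chosen for illustration) showing how even a simple noise model degrades circuit fidelity as depth grows:

```python
import numpy as np

I = np.eye(2, dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def depolarize(rho, p):
    """Single-qubit depolarizing channel: with probability p, replace the
    state with the maximally mixed state I/2 (an assumed noise model)."""
    return (1 - p) * rho + p * I / 2

def fidelity_after(depth, p):
    """Apply `depth` Hadamard gates with noise after each gate; compare the
    noisy density matrix against the ideal (noise-free) evolution."""
    rho = np.array([[1, 0], [0, 0]], dtype=complex)  # start in |0><0|
    ideal = rho.copy()
    for _ in range(depth):
        rho = depolarize(H @ rho @ H.conj().T, p)
        ideal = H @ ideal @ H.conj().T
    # Fidelity of a mixed state against a pure ideal state: Tr(ideal @ rho)
    return np.real(np.trace(ideal @ rho))

for depth in (1, 10, 100):
    print(f"depth {depth:3d}: fidelity {fidelity_after(depth, p=0.01):.3f}")
```

Even with a per-gate error probability of only 1%, fidelity decays toward 0.5 by depth 100, so the same benchmark circuit can yield very different scores depending on how the noise is modeled.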
Quantum computers, if fully realized, promise to be a revolutionary technology. As a result, quantum computing has become one of the hottest areas of research in the last few years. Much effort is being applied at all levels of the system stack, from the creation of quantum algorithms to the development of hardware devices. The quantum age appears to be arriving sooner rather than later as commercially useful small-to-medium sized machines have already been built. However, full-scale quantum computers, and the full-scale algorithms they would perform, remain out of reach for now. It is currently uncertain how the first such computer will be built. Many different technologies are competing to be the first scalable quantum computer.
A critical challenge for modern system design is meeting the overwhelming performance, storage, and communication bandwidth demands of emerging applications within a tightly bound power budget. As the time and power, and hence the energy, spent on data communication far exceed the energy spent on actual data generation (i.e., computation), (re)computing data can easily become cheaper than storing and retrieving (pre)computed data. Therefore, trading computation for communication can improve energy efficiency by minimizing the energy overhead incurred by data storage, retrieval, and communication. This paper hence provides a taxonomy for the computation vs. communication trade-off, along with a quantitative characterization.
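As a rough, back-of-the-envelope sketch of this trade-off (the per-operation energy figures below are assumed placeholders reflecting the general trend that off-chip data movement costs orders of magnitude more than on-chip arithmetic, not measurements from the paper):

```python
# Illustrative energy parameters (assumptions, not measured values).
DRAM_LOAD_PJ = 1000.0   # energy to retrieve one stored value from DRAM
ALU_OP_PJ    = 5.0      # energy of one arithmetic op on cached operands

def retrieve_energy(n_values):
    """Energy to load n precomputed values from memory."""
    return n_values * DRAM_LOAD_PJ

def recompute_energy(n_ops):
    """Energy to regenerate a value with n ALU operations, assuming the
    inputs are already resident on chip (registers or cache)."""
    return n_ops * ALU_OP_PJ

# Even 20 arithmetic operations (100 pJ) undercut a single DRAM access
# (1000 pJ), so trading computation for communication saves energy here.
print(recompute_energy(20), "pJ to recompute vs.",
      retrieve_energy(1), "pJ to load")
```

Under these assumed parameters, recomputation wins whenever regenerating a value takes fewer than DRAM_LOAD_PJ / ALU_OP_PJ = 200 operations; the actual break-even point depends on the technology node and memory hierarchy.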
Systematic checkpointing of the machine state makes restarting execution from a safe state possible upon detection of an error. The time and energy overhead of checkpointing, however, grows with the frequency of checkpointing. Amortizing this overhead becomes especially challenging considering the growth of expected error rates, as checkpointing frequency tends to increase with increasing error rates. Based on the observation that, due to imbalanced technology scaling, recomputing a data value can be more energy efficient than retrieving (i.e., loading) a stored copy, this paper explores how recomputation of data values (which otherwise would be read from a checkpoint in memory or secondary storage) can reduce the machine state to be checkpointed, and thereby reduce the checkpointing overhead. Specifically, the resulting amnesic checkpointing framework AmnesiCHK can reduce the storage overhead by up to 23.91%, the time overhead by 11.92%, and the energy overhead by 12.53%, even in a relatively small-scale system.
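A conceptual sketch of the core idea follows (a toy model, not the AmnesiCHK implementation; the dictionary-based state and the recomputation rules are assumptions for illustration): values derivable from other checkpointed data are excluded from the checkpoint and regenerated on restart.

```python
def checkpoint(state, recompute_rules):
    """Persist only values with no recomputation rule; the rest are
    deliberately 'forgotten' to shrink the checkpointed state."""
    return {k: v for k, v in state.items() if k not in recompute_rules}

def restart(saved, recompute_rules):
    """Restore the full state by recomputing the omitted values from the
    values that were actually persisted."""
    state = dict(saved)
    for key, fn in recompute_rules.items():
        state[key] = fn(state)   # regenerate instead of having stored it
    return state

# Example: 'c' is derivable from 'a' and 'b', so it is never written out.
rules = {"c": lambda s: s["a"] + s["b"]}
saved = checkpoint({"a": 2, "b": 3, "c": 5}, rules)  # only a and b persisted
print(saved, "->", restart(saved, rules))
```

The saving comes from the same observation as above: when recomputing `c` costs less energy than storing and later reloading it, dropping it from the checkpoint reduces storage, time, and energy overhead at once.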
Based on the observation that application phases exhibit varying degrees of sensitivity to noise (i.e., accuracy loss) in computation during execution, this paper explores how Dynamic Precision Scaling (DPS) can maximize power efficiency by tailoring the precision of computation adaptively to temporal changes in algorithmic noise tolerance. DPS can decrease the arithmetic precision of noise-tolerant phases, yielding power savings at the same operating speed (or faster execution within the same power budget), while keeping the overall loss in accuracy due to precision reduction bounded.
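A minimal sketch of the precision-scaling mechanism (an illustration under assumed parameters, not the paper's DPS controller): truncating the mantissa emulates running a phase at reduced arithmetic precision, and the resulting accuracy loss can be checked against a bound directly.

```python
import numpy as np

def reduce_precision(x, mantissa_bits):
    """Emulate reduced-precision arithmetic by truncating each value to the
    given number of mantissa bits (x = m * 2**e with 0.5 <= |m| < 1)."""
    m, e = np.frexp(x)
    m = np.round(m * 2**mantissa_bits) / 2**mantissa_bits
    return np.ldexp(m, e)

# A noise-tolerant phase could run at 10 or even 4 mantissa bits instead of
# the full 23 (single precision), trading bounded error for power savings.
x = np.linspace(0.0, 1.0, 5)
reference = np.sin(x)
for bits in (23, 10, 4):
    err = np.max(np.abs(reduce_precision(reference, bits) - reference))
    print(f"{bits:2d} mantissa bits -> max error {err:.2e}")
```

In an actual DPS system the precision knob would drive narrower functional units or datapaths rather than software truncation; the sketch only shows how the accuracy loss stays bounded and monotonic in the number of bits dropped.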