
Guaranteed energy-efficient bit reset in finite time

Posted by: Cormac Browne
Publication date: 2013
Research field: Physics
Paper language: English





Landauer's principle states that it costs at least kT ln 2 of work to reset one bit in the presence of a heat bath at temperature T. The bound of kT ln 2 is achieved in the unphysical infinite-time limit. Here we ask what is possible if one is restricted to finite-time protocols. We prove analytically that it is possible to reset a bit with a work cost close to kT ln 2 in a finite time. We construct an explicit protocol that achieves this, which involves changing the system's Hamiltonian so as to avoid quantum coherences, and thermalising. Using concepts and techniques pertaining to single-shot statistical mechanics, we further develop the limit on the work cost, proving that the heat dissipated is close to the minimum possible not just on average, but guaranteed with high confidence in every run. Moreover, we exploit the protocol to design a quantum heat engine that works near the Carnot efficiency in finite time.
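
For reference, the two textbook relations this abstract builds on are stated below; these are standard forms written here for convenience, not equations reproduced from the paper itself.

```latex
% Landauer's bound: resetting one bit in contact with a bath at temperature T
% costs at least k_B T ln 2 of work; equality is approached only quasistatically.
\[
  W_{\text{reset}} \;\ge\; k_{\mathrm{B}} T \ln 2,
  \qquad
  W_{\text{reset}} \;\to\; k_{\mathrm{B}} T \ln 2
  \quad \text{as the protocol duration } \tau \to \infty .
\]

% Carnot efficiency, the benchmark approached by the finite-time heat engine:
\[
  \eta_{\mathrm{C}} \;=\; 1 - \frac{T_{\text{cold}}}{T_{\text{hot}}} .
\]
```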


Read also

We consider how the energy cost of bit reset scales with the time duration of the protocol. Bit reset necessarily takes place in finite time, where there is an extra penalty on top of the quasistatic work cost derived by Landauer. This extra energy is dissipated as heat in the computer, inducing a fundamental limit on the speed of irreversible computers. We formulate a hardware-independent expression for this limit. We derive a closed-form lower bound on the work penalty as a function of the time taken for the protocol and bit reset error. It holds for discrete as well as continuous systems, assuming only that the master equation respects detailed balance.
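
A minimal numerical sketch of the effect described above, assuming a two-level bit whose excited-state energy is ramped linearly to E_max while its population relaxes exponentially toward the instantaneous Gibbs distribution; the relaxation rate gamma, the ramp shape, and all parameter values are illustrative choices, and this is not the paper's model or its closed-form bound.

```python
import numpy as np

def reset_work(tau, gamma=1.0, kT=1.0, E_max=20.0, steps=20000):
    """Toy finite-time bit reset: ramp the energy of state |1> from 0 to E_max
    over a duration tau while the excited population relaxes toward the
    instantaneous Gibbs value, dp/dt = -gamma * (p - p_eq(E)).
    Returns (work in units of kT, residual error = final excited population)."""
    dt = tau / steps
    p1 = 0.5                                   # unknown bit: fully mixed start
    work = 0.0
    for k in range(steps):
        E_old = E_max * k / steps
        E_new = E_max * (k + 1) / steps
        work += p1 * (E_new - E_old)           # work done raising the level
        p_eq = 1.0 / (1.0 + np.exp(E_new / kT))
        p1 += -gamma * (p1 - p_eq) * dt        # partial thermalisation step
    return work / kT, p1

if __name__ == "__main__":
    landauer = np.log(2)
    for tau in (1.0, 10.0, 100.0, 1000.0):
        w, err = reset_work(tau)
        print(f"tau = {tau:7.1f}   work = {w:6.3f} kT   "
              f"excess over ln 2 = {w - landauer:6.3f} kT   error = {err:.2e}")
```

The excess work shrinks as the protocol is slowed down, which is the qualitative trade-off between speed, dissipation and reset error that the bound above quantifies.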
Deep neural network (DNN) accelerators received considerable attention in past years due to saved energy compared to mainstream hardware. Low-voltage operation of DNN accelerators allows to further reduce energy consumption significantly, however, causes bit-level failures in the memory storing the quantized DNN weights. In this paper, we show that a combination of robust fixed-point quantization, weight clipping, and random bit error training (RandBET) improves robustness against random bit errors in (quantized) DNN weights significantly. This leads to high energy savings from both low-voltage operation as well as low-precision quantization. Our approach generalizes across operating voltages and accelerators, as demonstrated on bit errors from profiled SRAM arrays. We also discuss why weight clipping alone is already a quite effective way to achieve robustness against bit errors. Moreover, we specifically discuss the involved trade-offs regarding accuracy, robustness and precision: Without losing more than 1% in accuracy compared to a normally trained 8-bit DNN, we can reduce energy consumption on CIFAR-10 by 20%. Higher energy savings of, e.g., 30%, are possible at the cost of 2.5% accuracy, even for 4-bit DNNs.
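
The core perturbation used in random bit error training can be sketched as below; the symmetric fixed-point quantization scheme, the function names, and the error rate are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def quantize(w, bits=8):
    """Symmetric fixed-point quantization of a weight tensor to signed integers."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int32)
    return q, scale

def inject_bit_errors(q, p, bits=8, rng=None):
    """Flip every stored bit independently with probability p, treating each
    quantized weight as a two's-complement bit pattern (a simple stand-in for
    random low-voltage memory faults)."""
    rng = np.random.default_rng() if rng is None else rng
    u = q.astype(np.int64) & ((1 << bits) - 1)          # unsigned bit pattern
    flips = rng.random((bits,) + q.shape) < p
    for b in range(bits):
        u = np.where(flips[b], u ^ (1 << b), u)
    return np.where(u >= 1 << (bits - 1), u - (1 << bits), u).astype(np.int32)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.05, size=(4, 8))             # stand-in weight tensor
    q, scale = quantize(w)
    q_noisy = inject_bit_errors(q, p=0.01, rng=rng)
    print("mean absolute weight perturbation:",
          np.mean(np.abs((q_noisy - q) * scale)))
```

In RandBET-style training one would feed such perturbed weights into the forward pass so the network learns to tolerate the flips; only the injection step is shown here.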
Although qubit coherence times and gate fidelities are continuously improving, logical encoding is essential to achieve fault tolerance in quantum computing. In most encoding schemes, correcting or tracking errors throughout the computation is necessary to implement a universal gate set without adding significant delays in the processor. Here we realize a classical control architecture for the fast extraction of errors based on multiple cycles of stabilizer measurements and subsequent correction. We demonstrate its application on a minimal bit-flip code with five transmon qubits, showing that real-time decoding and correction based on multiple stabilizers is superior in both speed and fidelity to repeated correction based on individual cycles. Furthermore, the encoded qubit can be rapidly measured, thus enabling conditional operations that rely on feed-forward, such as logical gates. This co-processing of classical and quantum information will be crucial in running a logical circuit at its full speed to outpace error accumulation.
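
A purely classical toy simulation of repeated stabilizer extraction on a three-qubit bit-flip code is sketched below; it assumes perfect parity measurements and independent flips, and does not model the transmon hardware, the ancilla qubits, or the real-time decoder described above.

```python
import random

# Syndrome lookup for the 3-qubit bit-flip code: the parities (d0^d1, d1^d2)
# point to the single data bit most likely to have flipped.
SYNDROME_TO_FLIP = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def logical_error_rate(p, cycles, trials=20000, seed=0):
    """Estimate the logical error rate of logical |0> after `cycles` rounds of
    bit-flip noise, parity measurement and immediate correction (toy model)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        d = [0, 0, 0]                                   # encode logical zero
        for _ in range(cycles):
            d = [b ^ (rng.random() < p) for b in d]     # independent bit flips
            s = (d[0] ^ d[1], d[1] ^ d[2])              # stabilizer parities
            if SYNDROME_TO_FLIP[s] is not None:
                d[SYNDROME_TO_FLIP[s]] ^= 1             # apply the correction
        failures += int(sum(d) >= 2)                    # majority-vote readout
    return failures / trials

if __name__ == "__main__":
    for p in (0.01, 0.03, 0.10):
        print(f"physical flip probability {p:.2f} per cycle  ->  "
              f"logical error after 5 cycles {logical_error_rate(p, 5):.4f}")
```

For small physical error rates the decoded logical error is strongly suppressed, which is the basic benefit that the hardware experiment demonstrates in real time.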
What is the minimum time required to take the temperature? In this paper, we solve this question for any process where temperature is inferred by measuring a probe (the thermometer) weakly coupled to the sample of interest, so that the probe's evolution is well described by a quantum Markovian master equation. Considering the most general control strategy on the probe (adaptive measurements, arbitrary control on the probe's state and Hamiltonian), we provide bounds on the achievable measurement precision in a finite amount of time, and show that in many scenarios these fundamental limits can be saturated with a relatively simple experiment. We find that for a general class of sample-probe interactions the scaling of the measurement uncertainty is inversely proportional to the time of the process, a shot-noise-like behaviour that arises due to the dissipative nature of thermometry. As a side result, we show that the Lamb shift induced by the probe-sample interaction can play a relevant role in thermometry, allowing for finite measurement resolution in the low-temperature regime (more precisely, the measurement uncertainty decays polynomially with the temperature as $T \rightarrow 0$, in contrast to the usual exponential decay with $T^{-1}$). We illustrate these general results for (i) a qubit probe interacting with a bosonic sample, where the role of the Lamb shift is highlighted, and (ii) a collective superradiant coupling between an $N$-qubit probe and a sample, which enables a quadratic decay with $N^2$ of the measurement uncertainty.
We implement an efficient energy-minimization algorithm for finite-difference micromagnetics that proves especially useful for the computation of hysteresis loops. Compared to results obtained by time integration of the Landau-Lifshitz-Gilbert equation, a speedup of up to two orders of magnitude is gained. The method is implemented in a finite-difference code running on CPUs as well as GPUs. This setup enables us to compute accurate hysteresis loops of large systems with a reasonable computational effort. As a benchmark we solve the µMag Standard Problem #1 with a high spatial resolution and compare the results to the solution of the Landau-Lifshitz-Gilbert equation in terms of accuracy and computing time.
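
To illustrate the general idea of computing a hysteresis loop by energy minimization rather than by integrating the Landau-Lifshitz-Gilbert equation, the sketch below applies it to a single Stoner-Wohlfarth macrospin; this toy analogue is an illustrative assumption and is unrelated to the paper's finite-difference implementation.

```python
import numpy as np
from scipy.optimize import minimize

def reduced_energy(theta, h, psi):
    """Stoner-Wohlfarth energy in units of Ku*V: uniaxial anisotropy plus Zeeman
    term.  theta (magnetization) and psi (applied field) are measured from the
    easy axis; h is the field in units of H_K = 2*Ku/(mu0*Ms)."""
    return np.sin(theta) ** 2 - 2.0 * h * np.cos(theta - psi)

def hysteresis_loop(psi=np.radians(45.0), h_max=1.5, n=301):
    """Sweep the field down and back up; at each field step relax the spin into
    the nearest local energy minimum, starting from its previous orientation.
    Following local minima (rather than the global one) is what opens the loop."""
    fields = np.concatenate([np.linspace(h_max, -h_max, n),
                             np.linspace(-h_max, h_max, n)])
    theta = psi                                # start roughly along the field
    loop = []
    for h in fields:
        res = minimize(lambda t: reduced_energy(t[0], h, psi), x0=[theta])
        theta = float(res.x[0])
        loop.append((h, np.cos(theta - psi)))  # magnetization along the field
    return np.array(loop)

if __name__ == "__main__":
    loop = hysteresis_loop()
    down, up = loop[:301], loop[301:]
    i_d = np.argmin(np.abs(down[:, 0]))
    i_u = np.argmin(np.abs(up[:, 0]))
    print("m(h ~ 0), down sweep:", round(down[i_d, 1], 3))
    print("m(h ~ 0), up sweep:  ", round(up[i_u, 1], 3))   # branches differ: open loop
```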