
NISQ+: Boosting quantum computing power by approximating quantum error correction

Added by Adam Holmes
Publication date: 2020
Field: Physics
Language: English





Quantum computers are growing in size, and design decisions are being made now that attempt to squeeze more computation out of these machines. In this spirit, we design a method to boost the computational power of near-term quantum computers by adapting protocols used in quantum error correction to implement Approximate Quantum Error Correction (AQEC). By approximating fully-fledged error correction mechanisms, we can increase the compute volume (qubits $\times$ gates, or Simple Quantum Volume (SQV)) of near-term machines. The crux of our design is a fast hardware decoder that can approximately decode detected error syndromes rapidly. Specifically, we demonstrate a proof-of-concept that approximate error decoding can be accomplished online in near-term quantum systems by designing and implementing a novel algorithm in Single-Flux Quantum (SFQ) superconducting logic technology. This avoids a critical decoding backlog, hidden in all offline decoding schemes, that leads to idle time exponential in the number of T gates in a program. Our design utilizes one SFQ processing module per physical qubit. Employing state-of-the-art SFQ synthesis tools, we show that the circuit area, power, and latency are within the constraints of contemporary quantum system designs. Under pure dephasing error models, the proposed accelerator and AQEC solution is able to expand SQV by factors between 3,402 and 11,163 on expected near-term machines. The decoder achieves a $5\%$ accuracy threshold and pseudo-thresholds of $\sim 5\%$, $4.75\%$, $4.5\%$, and $3.5\%$ physical error rates for code distances $3, 5, 7,$ and $9$. Decoding solutions are achieved in a maximum of $\sim 20$ nanoseconds on the largest code distances studied. By avoiding the exponential idle time in offline decoders, we achieve a $10\times$ reduction in required code distances to achieve the same logical performance as alternative designs.
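To make the decoder's role concrete, the following is a minimal illustrative sketch (a toy example, not the paper's SFQ decoder design) of constant-latency, lookup-table syndrome decoding for a distance-3 phase-flip repetition code under pure dephasing. Each 2-bit syndrome maps to at most one corrective Z, so a decode is a single table lookup, the kind of fixed-latency operation a per-qubit hardware module could perform online.

```python
# Minimal, illustrative sketch (not the paper's SFQ decoder): constant-latency
# lookup-table decoding for a distance-3 phase-flip repetition code.
# Under pure dephasing, errors are Z flips; stabilizers X0X1 and X1X2 yield a
# 2-bit syndrome that uniquely identifies any single-qubit Z error.

import random

# Syndrome (s01, s12) -> index of the qubit to apply a corrective Z to (None = no correction).
SYNDROME_TABLE = {
    (0, 0): None,   # no error detected
    (1, 0): 0,      # Z on qubit 0 anticommutes with X0X1 only
    (1, 1): 1,      # Z on qubit 1 anticommutes with both stabilizers
    (0, 1): 2,      # Z on qubit 2 anticommutes with X1X2 only
}

def measure_syndrome(z_errors):
    """z_errors[i] == 1 if qubit i suffered a Z flip; return the two stabilizer outcomes."""
    return (z_errors[0] ^ z_errors[1], z_errors[1] ^ z_errors[2])

def decode(syndrome):
    """Constant-time table lookup, analogous to what a per-qubit hardware module does online."""
    return SYNDROME_TABLE[syndrome]

def logical_error_rate(p, trials=100_000):
    """Monte Carlo estimate of the logical error rate under i.i.d. dephasing with probability p."""
    failures = 0
    for _ in range(trials):
        errors = [1 if random.random() < p else 0 for _ in range(3)]
        correction = decode(measure_syndrome(errors))
        if correction is not None:
            errors[correction] ^= 1
        # After correction the residual pattern has trivial syndrome, so an odd number of
        # remaining flips means all three flipped, i.e. a logical error.
        failures += sum(errors) % 2
    return failures / trials

if __name__ == "__main__":
    for p in (0.01, 0.05, 0.10):
        print(f"p = {p:.2f}: logical error rate ~ {logical_error_rate(p):.4f}")
```

A full surface-code decoder cannot be a plain lookup table at larger distances; the sketch only illustrates the online, fixed-latency decoding requirement that the paper's hardware design targets.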



Related research

The accumulation of quantum phase in response to a signal is the central mechanism of quantum sensing; as such, loss of phase information presents a fundamental limitation. For this reason, approaches to extend quantum coherence in the presence of noise are actively being explored. Here we experimentally protect a room-temperature hybrid spin register against environmental decoherence by performing repeated quantum error correction whilst maintaining sensitivity to signal fields. We use a long-lived nuclear spin to correct multiple phase errors on a sensitive electron spin in diamond and realize magnetic field sensing beyond the timescales set by natural decoherence. The universal extension of sensing time, robust to noise at any frequency, demonstrates the definitive advantage entangled multi-qubit systems provide for quantum sensing and offers an important complement to quantum control techniques. In particular, our work opens the door to detecting minute signals in the presence of high-frequency noise, where standard protocols reach their limits.
218 - Kosuke Fukui, Akihisa Tomita, 2018
To implement fault-tolerant quantum computation with continuous variables, the Gottesman--Kitaev--Preskill (GKP) qubit has been recognized as an important technological element. We have proposed a method to reduce the required squeezing level to realize large-scale quantum computation with the GKP qubit [Phys. Rev. X \textbf{8}, 021054 (2018)], harnessing the virtue of analog information in the GKP qubits. In the present work, to reduce the number of qubits required for large-scale quantum computation, we propose tracking quantum error correction, where the logical-qubit level quantum error correction is partially substituted by the single-qubit level quantum error correction. In the proposed method, analog quantum error correction is utilized to make the performance of the single-qubit level quantum error correction almost identical to that of the logical-qubit level quantum error correction at a practical noise level. The numerical results show that the proposed tracking quantum error correction reduces the number of qubits during a quantum error correction process by the reduction rate $\left\{2(n-1)\times 4^{l-1}-n+1\right\}/\left(2n\times 4^{l-1}\right)$ for $n$ cycles of the quantum error correction process using Knill's $C_{4}/C_{6}$ code with concatenation level $l$. Hence, the proposed tracking quantum error correction has a great advantage in reducing the required number of physical qubits, and will open a new way to bring out the advantage of the GKP qubits in practical quantum computation.
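As a quick sanity check on the quoted reduction rate, the short sketch below (an illustration, not the authors' code) evaluates the formula from the abstract for a few cycle counts $n$ and concatenation levels $l$; for large $l$ the rate approaches $(n-1)/n$.

```python
# Illustrative evaluation of the qubit-reduction rate quoted in the abstract:
#   rate(n, l) = {2(n-1)*4^(l-1) - n + 1} / (2n*4^(l-1))
# for n cycles of QEC at concatenation level l.

def reduction_rate(n: int, l: int) -> float:
    """Fraction of qubits saved by tracking QEC over n cycles at concatenation level l."""
    return (2 * (n - 1) * 4 ** (l - 1) - n + 1) / (2 * n * 4 ** (l - 1))

if __name__ == "__main__":
    for l in (1, 2, 3):
        for n in (2, 5, 10):
            print(f"l = {l}, n = {n:2d}: reduction rate = {reduction_rate(n, l):.3f}")
    # As l grows, the rate approaches (n - 1) / n.
```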
86 - John Preskill, 2018
Noisy Intermediate-Scale Quantum (NISQ) technology will be available in the near future. Quantum computers with 50-100 qubits may be able to perform tasks which surpass the capabilities of today's classical digital computers, but noise in quantum gates will limit the size of quantum circuits that can be executed reliably. NISQ devices will be useful tools for exploring many-body quantum physics, and may have other useful applications, but the 100-qubit quantum computer will not change the world right away; we should regard it as a significant step toward the more powerful quantum technologies of the future. Quantum technologists should continue to strive for more accurate quantum gates and, eventually, fully fault-tolerant quantum computing.
217 - Andrew S. Fletcher, 2007
Quantum error correction (QEC) is an essential concept for any quantum information processing device. Typically, QEC is designed with minimal assumptions about the noise process; this generic assumption exacts a high cost in efficiency and performance. In physical systems, errors are not likely to be arbitrary; rather we will have reasonable models for the structure of quantum decoherence. We may choose quantum error correcting codes and recovery operations that specifically target the most likely errors. We present a convex optimization method to determine the optimal (in terms of average entanglement fidelity) recovery operation for a given channel, encoding, and information source. This is solvable via a semidefinite program (SDP). We present computational algorithms to generate near-optimal recovery operations structured to begin with a projective syndrome measurement. These structured operations are more computationally scalable than the SDP required for computing the optimal; we can thus numerically analyze longer codes. Using Lagrange duality, we bound the performance of the structured recovery operations and show that they are nearly optimal in many relevant cases. We present two classes of channel-adapted quantum error correcting codes specifically designed for the amplitude damping channel. These have significantly higher rates with shorter block lengths than corresponding generic quantum error correcting codes. Both classes are stabilizer codes, and have good fidelity performance with stabilizer recovery operations. The encoding, syndrome measurement, and syndrome recovery operations can all be implemented with Clifford group operations.
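The optimal-recovery problem described above can be posed as a compact SDP. The snippet below is a minimal sketch of that idea (a toy formulation, not the authors' code; it assumes the `cvxpy` package and its bundled SCS solver): for a single-qubit amplitude damping channel with no encoding, it maximizes entanglement fidelity over the Choi matrix of the recovery map, subject to complete positivity and trace preservation.

```python
# Minimal sketch of channel-adapted recovery as an SDP (illustrative; assumes cvxpy + SCS).
# Variable: the Choi matrix X of the recovery map R, in (input space ⊗ output space) ordering.
# Objective (linear in X): entanglement fidelity of R∘A,
#   F_e = (1/d^2) * sum_i vec(A_i^†)^† X vec(A_i^†)
# Constraints: X ⪰ 0 (complete positivity) and partial trace over the output equals I (trace preservation).
import numpy as np
import cvxpy as cp

gamma = 0.3          # amplitude damping strength
d = 2                # input (logical) dimension -- no encoding in this toy example
D = 2                # output (physical) dimension

# Kraus operators of the amplitude damping channel A.
A0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex)
A1 = np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)

def vec(M):
    """Column-stacking vectorization."""
    return M.reshape(-1, order="F")

# Fidelity matrix: F_e = (1/d^2) Tr(C X) with C = sum_i vec(A_i^†) vec(A_i^†)^†.
C = sum(np.outer(vec(Ai.conj().T), vec(Ai.conj().T).conj()) for Ai in (A0, A1))

X = cp.Variable((D * d, D * d), hermitian=True)
constraints = [X >> 0]
# Trace preservation: tracing out the output factor of X must give the identity on the input.
for m in range(D):
    for n in range(D):
        lhs = sum(X[m * d + k, n * d + k] for k in range(d))
        constraints.append(lhs == (1.0 if m == n else 0.0))

objective = cp.Maximize(cp.real(cp.trace(C @ X)) / d**2)
problem = cp.Problem(objective, constraints)
problem.solve(solver=cp.SCS)
print(f"optimal entanglement fidelity ~ {problem.value:.4f}")
# For comparison, doing nothing (identity recovery) gives (1 + sqrt(1 - gamma))^2 / 4.
print(f"identity-recovery fidelity    = {(1 + np.sqrt(1 - gamma)) ** 2 / 4:.4f}")
```

With an encoding, the same program applies after composing the encoding isometry into the noise Kraus operators; the structured, syndrome-first recoveries discussed in the abstract restrict this search space to regain scalability.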
Based on the group structure of a unitary Lie algebra, a scheme is provided to systematically and exhaustively generate quantum error correction codes, including the additive and nonadditive codes. The syndromes in the error-correction process are distinguished by different orthogonal vector subspaces, the coset subspaces. Moreover, the generated codes can be classified into four types with respect to the spinors in the unitary Lie algebra and a chosen initial quantum state.
