
Requirements for fault-tolerant factoring on an atom-optics quantum computer

Published by: Dr Simon Devitt
Publication date: 2012
Research field: Physics
Paper language: English





Quantum information processing and its associated technologies have reached an interesting and timely stage in their development, where many different experiments have been performed establishing the basic building blocks. The challenge moving forward is to scale up to larger quantum machines capable of performing tasks not possible today. This raises a number of pressing questions: How big will these machines need to be? How many resources will they consume? These questions need to be addressed urgently. Here we estimate the resources required to execute Shor's factoring algorithm on a distributed atom-optics quantum computer architecture. We determine the runtime and requisite size of the quantum computer as a function of the problem size and physical error rate. Our results suggest that once experimental accuracy reaches levels below the fault-tolerant threshold, further optimisation of computational performance and resources is largely an issue of how the algorithm and circuits are implemented, rather than of the physical quantum hardware.
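As a rough illustration of how such resource estimates scale, consider the following back-of-envelope sketch. This is a generic calculation, not the paper's model: the threshold p_th, the prefactor A, and the O(n^3) logical gate count for Shor's algorithm on an n-bit modulus are all placeholder assumptions.

```python
# Back-of-envelope resource scaling for fault-tolerant factoring.
# NOT the paper's model: p_th, A, and the O(n^3) gate count are
# generic placeholder assumptions for a topological code.

def logical_error_rate(p: float, d: int, p_th: float = 1e-2, A: float = 0.1) -> float:
    """Common topological-code heuristic: ~ A * (p / p_th)**((d + 1) / 2)."""
    return A * (p / p_th) ** ((d + 1) / 2)

def required_distance(p: float, target: float) -> int:
    """Smallest odd code distance whose logical error rate meets `target`."""
    d = 3
    while logical_error_rate(p, d) > target:
        d += 2
    return d

n = 1024                      # modulus size in bits
p = 1e-3                      # assumed physical error rate, below threshold
target = 1.0 / (100 * n**3)   # error budget spread over ~n^3 logical gates
d = required_distance(p, target)
print(f"code distance ~ {d}; physical qubits per logical qubit ~ {2 * d**2}")
```

Under these assumptions the distance, and hence the qubit overhead, grows only logarithmically in the problem size once the physical error rate sits below threshold, which is consistent with the abstract's point that algorithm- and circuit-level choices then dominate the cost.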




Read also

John M. Martinis, 2015
Recent progress in quantum information has led to the start of several large national and industrial efforts to build a quantum computer. Researchers are now working to overcome many scientific and technological challenges. The program's biggest obstacle, a potential showstopper for the entire effort, is the need for high-fidelity qubit operations in a scalable architecture. This challenge arises from the fundamental fragility of quantum information, which can only be overcome with quantum error correction. In a fault-tolerant quantum computer the qubits and their logic interactions must have errors below a threshold: scaling up with more and more qubits then brings the net error probability down to the level of $\sim 10^{-18}$ needed for running complex algorithms. Reducing error requires solving problems in physics, control, materials and fabrication, which differ for every implementation. I explain here the common key driver for continued improvement - the metrology of qubit errors.
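The qubit-error metrology highlighted above is commonly performed with randomized benchmarking. The sketch below is a generic illustration of that procedure, not drawn from the article: it fits the standard decay $F(m) = A r^m + B$ to simulated survival probabilities and converts the decay constant into an average error per Clifford; all numbers are made up.

```python
# Minimal randomized-benchmarking sketch (illustrative only): fit the
# survival probability F(m) = A * r**m + B over sequence length m, then
# convert the decay r to an average error per Clifford for a single qubit.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
m = np.arange(1, 201, 10)                  # benchmarking sequence lengths
r_true, A_true, B_true = 0.995, 0.5, 0.5   # assumed ground truth
F = A_true * r_true**m + B_true + rng.normal(0, 0.005, m.size)

def model(m, A, r, B):
    return A * r**m + B

(A, r, B), _ = curve_fit(model, m, F, p0=(0.5, 0.99, 0.5))
eps = (1 - r) * (2 - 1) / 2                # error per gate, dimension 2
print(f"decay r = {r:.4f}, error per Clifford ~ {eps:.2e}")
```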
Photonics is the platform of choice to build a modular, easy-to-network quantum computer operating at room temperature. However, no concrete architecture has been presented so far that exploits both the advantages of qubits encoded into states of light and the modern tools for their generation. Here we propose such a design for a scalable and fault-tolerant photonic quantum computer informed by the latest developments in theory and technology. Central to our architecture is the generation and manipulation of three-dimensional hybrid resource states comprising both bosonic qubits and squeezed vacuum states. The proposal enables exploiting state-of-the-art procedures for the non-deterministic generation of bosonic qubits combined with the strengths of continuous-variable quantum computation, namely the implementation of Clifford gates using easy-to-generate squeezed states. Moreover, the architecture is based on two-dimensional integrated photonic chips used to produce a qubit cluster state in one temporal and two spatial dimensions. By reducing the experimental challenges as compared to existing architectures and by enabling room-temperature quantum computation, our design opens the door to scalable fabrication and operation, which may allow photonics to leap-frog other platforms on the path to a quantum computer with millions of qubits.
The scalability of photonic implementations of fault-tolerant quantum computing based on Gottesman-Kitaev-Preskill (GKP) qubits is hampered by the requirements of inline squeezing and reconfigurability of the linear optical network. In this work we propose a topologically error-corrected architecture that does away with these elements at no cost - in fact, at an advantage - to state preparation overheads. Our computer consists of three modules: a 2D array of probabilistic sources of GKP states; a depth-four circuit of static beamsplitters, phase shifters, and single-time-step delay lines; and a 2D array of homodyne detectors. The symmetry of our proposed circuit allows us to combine the effects of finite squeezing and uniform photon loss within the noise model, resulting in more comprehensive threshold estimates. These jumps over both architectural and analytical hurdles considerably expedite the construction of a photonic quantum computer.
We optimize the area and latency of Shor's factoring while simultaneously improving fault tolerance through: (1) balancing the use of ancilla generators, (2) aggressive optimization of error correction, and (3) tuning the core adder circuits. Our custom CAD flow produces detailed layouts of the physical components and utilizes simulation to analyze circuits in terms of area, latency, and success probability. We introduce a metric, called ADCR, which is the probabilistic equivalent of the classic Area-Delay product. Our error correction optimization can reduce ADCR by an order of magnitude or more. Contrary to conventional wisdom, we show that the area of an optimized quantum circuit is not dominated exclusively by error correction. Further, our adder evaluation shows that quantum carry-lookahead adders (QCLA) beat ripple-carry adders in ADCR, despite being larger and more complex. We conclude with what we believe is one of the most accurate estimates of the area and latency required for 1024-bit Shor's factorization: 7659 mm$^{2}$ for the smallest circuit and $6 * 10^8$ seconds for the fastest circuit.
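The abstract introduces ADCR only as the probabilistic analogue of the area-delay product. A plausible minimal reading, sketched below with entirely made-up numbers, divides area times latency by the success probability, so that a circuit likely to need retries is charged accordingly; the exact definition used in the paper may differ.

```python
# Hypothetical sketch of an ADCR-style metric: the classic area-delay
# product scaled by the expected number of runs, 1 / p_success.
# The definition and all example figures are illustrative assumptions,
# not data from the paper.

def adcr(area_mm2: float, latency_s: float, p_success: float) -> float:
    """Area * delay, charged for expected retries of a probabilistic circuit."""
    return area_mm2 * latency_s / p_success

# A larger, faster adder layout can still win on ADCR if its success
# probability holds up (all numbers hypothetical).
small_slow = adcr(area_mm2=4000.0, latency_s=9.0e8, p_success=0.60)
large_fast = adcr(area_mm2=7000.0, latency_s=4.0e8, p_success=0.55)
print(f"small/slow ADCR = {small_slow:.3e}, large/fast ADCR = {large_fast:.3e}")
```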
We analyze the requirements for fault-tolerant quantum computation with atom-atom gates based on cavity quantum electrodynamics (cQED) mediated by a photon with a finite pulse length. For short photon pulses, the distorted shape of the reflected pulses from the cQED system is a serious error source. We optimize the cQED system parameters to minimize the infidelity due to the shape distortion and the photon losses in a well-balanced manner for the fault-tolerant scheme using probabilistic gates [H. Goto and K. Ichimura, Phys. Rev. A 80, 040303(R) (2009)]. Our optimization greatly relaxes the requirements for fault-tolerant quantum computation in some parameter regions, compared with the conventional optimization method where only the photon loss is minimized without considering the shape distortion [H. Goto and K. Ichimura, Phys. Rev. A 82, 032311 (2010)]. Finally, we show that reducing the cavity length is an effective way to reduce the errors of this type of gate in the case of short photon pulses.