Networked entanglement is an essential component for a plethora of quantum computation and communication protocols. Direct transmission of quantum signals over long distances is prevented by fibre attenuation and the no-cloning theorem, motivating the development of quantum repeaters, designed to purify entanglement and extend its range. Quantum repeaters have been demonstrated over short distances, but error-corrected, global repeater networks with high bandwidth require new technology. Here we show that error-corrected quantum memories installed in cargo containers and carried by ship can provide a flexible connection between local networks, enabling low-latency, high-fidelity quantum communication across global distances at higher bandwidths than previously proposed. With demonstrations of component technologies at fidelities sufficient for topological error correction, implementation of the quantum memories is within reach, and bandwidth increases with improvements in fabrication. Our approach to quantum networking avoids the technological restrictions of repeater deployment, providing an alternative path to a worldwide Quantum Internet.
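To see why shipped memories can outpace direct transmission, compare the two channels with a back-of-envelope calculation. The sketch below is illustrative only: the attenuation figure is standard for telecom fibre, but the source rate, container capacity, and ship speed are assumptions, not figures from the paper.

```python
# Back-of-envelope comparison of direct fibre transmission against
# shipped quantum memories.  All parameters are illustrative
# assumptions, not figures from the paper.

ATTENUATION_DB_PER_KM = 0.2      # typical telecom-fibre loss
SOURCE_RATE_HZ = 1e9             # assumed photon-source repetition rate

def direct_fibre_rate(distance_km: float) -> float:
    """Entanglement rate surviving direct transmission through fibre."""
    transmittance = 10 ** (-ATTENUATION_DB_PER_KM * distance_km / 10)
    return SOURCE_RATE_HZ * transmittance

def shipped_memory_rate(distance_km: float,
                        qubits_per_container: float = 1e12,
                        ship_speed_km_per_day: float = 1000.0) -> float:
    """Effective delivery rate of Bell pairs stored in a shipped memory."""
    voyage_seconds = distance_km / ship_speed_km_per_day * 86400
    return qubits_per_container / voyage_seconds

for d in (1_000, 5_000, 10_000):
    print(f"{d:>6} km: fibre {direct_fibre_rate(d):.3g} Hz, "
          f"ship {shipped_memory_rate(d):.3g} Hz")
```

The fibre rate decays exponentially with distance while the shipped rate falls only linearly, so at transoceanic distances the container wins by many orders of magnitude under these assumptions.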
Ashley M. Stephens (2014)
Topological color codes defined by the 4.8.8 semiregular lattice feature geometrically local check operators and admit transversal implementation of the entire Clifford group, making them promising candidates for fault-tolerant quantum computation. Recently, several efficient algorithms for decoding the syndrome of color codes were proposed. Here, we modify one of these algorithms to account for errors affecting the syndrome, applying it to the family of triangular 4.8.8 color codes encoding one logical qubit. For a three-dimensional bit-flip channel, we report a threshold error rate of 0.0208(1), compared with 0.0305(4) previously reported for an integer-program-based decoding algorithm. When we account for circuit details, this threshold is reduced to 0.00143(1) per gate, compared with 0.00672(1) per gate for the surface code under an identical noise model.
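Threshold values like those quoted above are extracted from simulations by locating the physical error rate at which logical-error curves for different code distances cross. The toy sketch below illustrates that methodology with a simple repetition code under independent bit-flips (whose crossing sits at 0.5); it is not the color-code decoder studied in the paper.

```python
# Toy illustration of numerical threshold estimation: simulate a code
# family at several distances and error rates, then look for the
# crossing point of the logical-error curves.  Uses a repetition code
# with majority-vote decoding, NOT the 4.8.8 color-code decoder.

import random

def logical_error_rate(distance: int, p: float, trials: int = 20000) -> float:
    failures = 0
    for _ in range(trials):
        flips = sum(random.random() < p for _ in range(distance))
        if flips > distance // 2:      # majority vote decodes incorrectly
            failures += 1
    return failures / trials

for p in (0.40, 0.45, 0.50, 0.55):
    rates = [logical_error_rate(d, p) for d in (5, 11, 21)]
    print(f"p={p:.2f}:", " ".join(f"{r:.3f}" for r in rates))
# Below threshold the logical rate falls with distance; above, it rises.
```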
Ashley M. Stephens (2013)
The surface code is a promising candidate for fault-tolerant quantum computation, achieving a high threshold error rate with nearest-neighbor gates in two spatial dimensions. Here, through a series of numerical simulations, we investigate how the precise value of the threshold depends on the noise model, measurement circuits, and decoding algorithm. We observe thresholds between 0.502(1)% and 1.140(1)% per gate, values which are generally lower than previous estimates.
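Below threshold, the practical question is what code distance a given physical error rate buys. A commonly used heuristic for the surface code (an assumption for illustration, not a result quoted in the abstract) is p_L ≈ A (p/p_th)^((d+1)/2), from which one can estimate the required distance:

```python
# Heuristic below-threshold scaling often assumed for the surface code:
#     p_L ~ A * (p / p_th) ** ((d + 1) / 2)
# The prefactor A and the threshold value here are illustrative.

def required_distance(p: float, p_th: float, target: float,
                      A: float = 0.1) -> int:
    """Smallest odd code distance d with estimated p_L <= target."""
    d = 3
    while A * (p / p_th) ** ((d + 1) / 2) > target:
        d += 2
    return d

# A physical error rate 10x below a ~1% threshold, targeting 1e-15:
print(required_distance(p=0.001, p_th=0.01, target=1e-15))  # -> 27
```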
Quantum information can be protected from decoherence and other errors, but only if these errors are sufficiently rare. For quantum computation to become a scalable technology, practical schemes for quantum error correction that can tolerate realistically high error rates will be necessary. In some physical systems, errors may exhibit a characteristic structure that can be carefully exploited to improve the efficacy of error correction. Here, we describe a scheme for topological quantum error correction to protect quantum information from a dephasing-biased error model, where we combine a repetition code with a topological cluster state. We find that the scheme tolerates error rates of up to 1.37%-1.83% per gate, requiring only short-range interactions in a two-dimensional array.
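The benefit of exploiting bias can be seen with simple counting: a phase-flip repetition code suppresses the dominant dephasing errors combinatorially, while the rare bit-flip errors only accumulate linearly. The snippet below is a sketch under an assumed independent biased-noise model, not the paper's circuit-level analysis.

```python
# Illustrative arithmetic for biased noise (assumed independent error
# model, not the paper's detailed analysis): with bias eta = pZ/pX, an
# n-qubit phase-flip repetition code suppresses Z errors combinatorially
# while uncorrected X errors accumulate only linearly.

from math import comb

def residual_rates(p: float, eta: float, n: int):
    pZ = p * eta / (1 + eta)          # dominant dephasing component
    pX = p / (1 + eta)                # rare bit-flip component
    z_logical = comb(n, (n + 1) // 2) * pZ ** ((n + 1) // 2)
    x_logical = n * pX                # X errors pass through uncorrected
    return z_logical, x_logical

# p = 1% total error, bias 100:1, five-qubit repetition code:
print(residual_rates(p=0.01, eta=100, n=5))   # Z suppressed well below X
```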
Quantum information processing and its associated technologies have reached an interesting and timely stage in their development, with many different experiments having established the basic building blocks. The challenge moving forward is to scale up to larger quantum machines capable of performing tasks not possible today. This raises a number of pressing questions: How big will these machines need to be? How many resources will they consume? Here we estimate the resources required to execute Shor's factoring algorithm on a distributed atom-optics quantum computer architecture. We determine the runtime and requisite size of the quantum computer as a function of the problem size and physical error rate. Our results suggest that once experimental accuracy reaches levels below the fault-tolerant threshold, further optimisation of computational performance and resources is largely an issue of how the algorithm and circuits are implemented, rather than of the physical quantum hardware.
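For a rough sense of the scaling involved, the sketch below combines two commonly quoted figures, a Beauregard-style circuit using 2n+3 logical qubits and a Toffoli count of order 40n^3, with an assumed logical gate time. These are illustrative assumptions; the paper's architecture-specific analysis is more detailed.

```python
# Rough resource scaling for Shor's algorithm.  Assumptions for
# illustration: Beauregard-style circuit (2n+3 logical qubits),
# ~40*n^3 Toffoli gates, serial execution at a fixed logical gate time.

def shor_resources(n_bits: int, gate_time_s: float = 1e-6,
                   toffoli_constant: float = 40.0):
    logical_qubits = 2 * n_bits + 3
    toffolis = toffoli_constant * n_bits ** 3
    runtime_s = toffolis * gate_time_s     # assumes serial Toffolis
    return logical_qubits, toffolis, runtime_s

for n in (1024, 2048):
    q, t, s = shor_resources(n)
    print(f"n={n}: {q} logical qubits, {t:.2e} Toffolis, ~{s/3600:.1f} h")
```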
We present a comprehensive and self-contained simplified review of the quantum computing scheme of Phys. Rev. Lett. 98, 190504 (2007), which features a 2-D nearest neighbor coupled lattice of qubits, a threshold error rate approaching 1%, natural asymmetric and adjustable strength error correction and low overhead arbitrarily long-range logical gates. These features make it by far the best and most practical quantum computing scheme devised to date. We restrict the discussion to direct manipulation of the surface code using the stabilizer formalism, both of which we also briefly review, to make the scheme accessible to a broad audience.
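A small worked example of the stabilizer formalism the review relies on: X-type (star) and Z-type (plaquette) surface-code stabilizers commute precisely because their supports overlap on an even number of qubits. The qubit labels below are arbitrary, chosen for illustration.

```python
# Minimal stabilizer-formalism check (a sketch, not the review's full
# construction): an X-type and a Z-type Pauli operator commute iff
# their supports overlap on an even number of qubits.  On the surface
# code, any star and plaquette share 0 or 2 qubits, so all stabilizers
# commute.

def commute(x_support: set, z_support: set) -> bool:
    """True iff the X-type and Z-type operators commute."""
    return len(x_support & z_support) % 2 == 0

# A neighbouring star and plaquette sharing two qubits:
star      = {0, 1, 2, 3}
plaquette = {2, 3, 4, 5}
assert commute(star, plaquette)          # overlap of size 2 -> commute
assert not commute({0, 1, 2}, {2})       # overlap of size 1 -> anticommute
print("stabilizer commutation checks passed")
```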
We present a layered hybrid-system approach to quantum communication that involves the distribution of a topological cluster state throughout a quantum network. Photon loss and other errors are suppressed by optical multiplexing and entanglement purification. The scheme is scalable to large distances, achieving an end-to-end rate of 1 kHz with around 50 qubits per node. We suggest a potentially suitable implementation of an individual node composed of erbium spins (single atom or ensemble) coupled via flux qubits to a microwave resonator, allowing for deterministic local gates, stable quantum memories, and emission of photons in the telecom regime.
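The role of optical multiplexing here can be captured with elementary probability: if a single heralded entanglement attempt across a lossy link succeeds with probability p, then N parallel attempts succeed at least once with probability 1-(1-p)^N. The numbers below are assumptions for illustration, not the paper's parameters.

```python
# Illustrative multiplexing arithmetic (assumed success probability,
# not the paper's parameters): N parallel heralded attempts succeed
# at least once with probability 1 - (1 - p)**N, which is how optical
# multiplexing suppresses the effect of photon loss on each link.

def multiplexed_success(p: float, n_modes: int) -> float:
    return 1 - (1 - p) ** n_modes

for n in (1, 10, 50):
    print(f"{n:>2} modes: {multiplexed_success(0.05, n):.3f}")
# 5% per attempt becomes ~40% with 10 modes and ~92% with 50 modes.
```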
In this paper we introduce a design for an optical topological cluster state computer constructed exclusively from a single quantum component. Unlike previous efforts, we eliminate the need for on-demand, high-fidelity photon sources and detectors and replace them with the same device utilised to create photon/photon entanglement. This introduces highly probabilistic elements into the optical architecture while maintaining complete specificity of the structure and operation of a large-scale computer. Photons in this system are continually recycled back into the preparation network, allowing an arbitrarily deep 3D cluster to be prepared using a comparatively small number of photonic qubits and consequently eliminating the need for high-frequency, deterministic photon sources.
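The economics of recycling follow from the geometric distribution: a probabilistic entangling operation with success probability p takes 1/p attempts per bond on average, so without recycling every failed attempt consumes fresh photons, while recycling returns them to the preparation network. The success probability below is an assumption for illustration.

```python
# Illustrative arithmetic for photon recycling (assumed per-attempt
# success probability, not a figure from the paper): each cluster bond
# takes on average 1/p attempts (geometric distribution), so recycling
# failed photons sharply reduces fresh-photon consumption.

p = 0.5                        # assumed entangling success probability
bonds = 1_000_000              # bonds needed for a deep 3D cluster
attempts = bonds / p           # expected total attempts
print(f"~{attempts:.0f} attempts; fresh photons ~{bonds} with recycling "
      f"vs ~{attempts:.0f} without")
```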
The development of a large scale quantum computer is a highly sought after goal of fundamental research and consequently a highly non-trivial problem. Scalability in quantum information processing is not just a problem of qubit manufacturing and control but it crucially depends on the ability to adapt advanced techniques in quantum information theory, such as error correction, to the experimental restrictions of assembling qubit arrays into the millions. In this paper we introduce a feasible architectural design for large scale quantum computation in optical systems. We combine the recent developments in topological cluster state computation with the photonic module, a simple chip based device which can be used as a fundamental building block for a large scale computer. The integration of the topological cluster model with this comparatively simple operational element addresses many significant issues in scalable computing and leads to a promising modular architecture with complete integration of active error correction exhibiting high fault-tolerant thresholds.