Parallelising the Queries in Bucket Brigade Quantum RAM

Published by: Alexandru Paler
Publication date: 2020
Language: English

Quantum algorithms often use quantum RAMs (QRAMs) to access information stored in a database-like manner. QRAMs have to be fast, resource efficient and fault-tolerant. Fault tolerance is influenced by access speed, because shorter access times expose the stored information to less noise. The total execution time of an algorithm depends on the QRAM access time, which includes: 1) the address translation time, and 2) the effective query time. Bucket brigade QRAMs were proposed to achieve faster addressing at the cost of exponentially many ancillae. We illustrate a systematic method to significantly reduce the effective query time by using Clifford+T gate parallelism. The method does not introduce any ancilla qubits and is compatible with surface code quantum error correction. We show that the parallelism is a result of an advantageous Toffoli gate decomposition in terms of Clifford+T gates, and that, once the addresses have been translated, the effective queries achieve theoretical $\mathcal{O}(1)$ parallelism. We conclude that, in theory: 1) fault-tolerant bucket brigade quantum RAM queries can be performed at approximately the speed of classical RAM; 2) the exponentially many ancillae of the bucket brigade addressing scheme are the trade-off cost for an exponential query speedup compared to quantum read-only memories, whose queries are sequential by design. The methods used to compile, parallelise and analyse the presented QRAM circuits are implemented in software that is available online.
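
The parallelisation argument above hinges on the fact that, once a Toffoli gate is decomposed into Clifford+T gates, many of the resulting one- and two-qubit gates act on disjoint qubits and can be executed in the same time step. The following sketch (plain Python, no quantum SDK) illustrates this counting argument using the standard textbook seven-T-gate Toffoli decomposition and a naive earliest-possible scheduler; both the decomposition choice and the scheduling are illustrative assumptions, not necessarily the ones used by the authors.

```python
# Illustrative sketch: Clifford+T decomposition of a Toffoli gate and a naive
# dependency-respecting schedule. The decomposition is the standard textbook
# one (2 H, 6 CNOT, 7 T/T-dagger gates); it is NOT claimed to be the exact
# decomposition or scheduler used in the paper.

def toffoli_clifford_t(a, b, c):
    """Toffoli with controls a, b and target c as a list of (gate, qubits)."""
    return [
        ("H", (c,)),
        ("CNOT", (b, c)), ("Tdg", (c,)),
        ("CNOT", (a, c)), ("T", (c,)),
        ("CNOT", (b, c)), ("Tdg", (c,)),
        ("CNOT", (a, c)), ("T", (b,)), ("T", (c,)),
        ("H", (c,)),
        ("CNOT", (a, b)), ("T", (a,)), ("Tdg", (b,)),
        ("CNOT", (a, b)),
    ]

def asap_layers(gates):
    """Place every gate in the earliest layer allowed by qubit dependencies."""
    last = {}    # qubit -> index of the last layer that touched it
    layers = []  # layers[i] = names of gates executed in parallel in layer i+1
    for name, qubits in gates:
        layer = 1 + max((last.get(q, 0) for q in qubits), default=0)
        for q in qubits:
            last[q] = layer
        while len(layers) < layer:
            layers.append([])
        layers[layer - 1].append(name)
    return layers

layers = asap_layers(toffoli_clifford_t("a", "b", "c"))
print("gate count          :", sum(len(l) for l in layers))  # 15 gates
print("sequential depth    :", sum(len(l) for l in layers))  # one gate per step
print("parallel depth      :", len(layers))                  # fewer time steps
print("layers containing T :", sum(1 for l in layers
                                    if any(g in ("T", "Tdg") for g in l)))
```

Even this naive schedule already shortens the circuit; the paper's parallelisation goes much further, reaching $\mathcal{O}(1)$ parallelism for the effective queries once the addresses have been translated.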


Read also

We study the robustness of the bucket brigade quantum random access memory model introduced by Giovannetti, Lloyd, and Maccone [Phys. Rev. Lett. 100, 160501 (2008)]. Due to a result of Regev and Schiff [ICALP 08 pp. 773], we show that for a class of error models the error rate per gate in the bucket brigade quantum memory has to be of order $o(2^{-n/2})$ (where $N=2^n$ is the size of the memory) whenever the memory is used as an oracle for the quantum searching problem. We conjecture that this is the case for any realistic error model that will be encountered in practice, and that for algorithms with super-polynomially many oracle queries the error rate must be super-polynomially small, which further motivates the need for quantum error correction. By contrast, for algorithms such as matrix inversion [Phys. Rev. Lett. 103, 150502 (2009)] or quantum machine learning [Phys. Rev. Lett. 113, 130503 (2014)] that only require a polynomial number of queries, the error rate only needs to be polynomially small and quantum error correction may not be required. We introduce a circuit model for the quantum bucket brigade architecture and argue that quantum error correction for the circuit causes the quantum bucket brigade architecture to lose its primary advantage of a small number of active gates, since all components have to be actively error corrected.
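
The scaling requirement above can be made concrete with a short back-of-the-envelope calculation. In the sketch below (plain Python), the memory sizes and the polynomial query count are illustrative choices, not values taken from the paper: Grover-type search over $N=2^n$ cells makes on the order of $\sqrt{N}$ oracle queries, so the per-gate error rate must be well below $2^{-n/2}$, whereas an algorithm making only polynomially many queries tolerates a polynomially small error rate.

```python
# Back-of-the-envelope illustration of the error-rate requirements discussed
# above. The memory sizes and the O(n^2) query count are made-up examples.
for n in (10, 20, 30, 40):
    N = 2 ** n                        # memory size
    grover_queries = N ** 0.5         # Grover search: ~sqrt(N) oracle queries
    search_bound = 2.0 ** (-n / 2)    # per-gate error must be o(2^{-n/2})
    poly_queries = n ** 2             # e.g. an algorithm with O(n^2) queries
    poly_bound = 1.0 / poly_queries   # polynomially small error suffices here
    print(f"n={n:2d}: ~{grover_queries:.1e} search queries, "
          f"per-gate error << {search_bound:.1e}; "
          f"{poly_queries} queries tolerate error ~ {poly_bound:.1e}")
```
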
The Quantum Internet is envisioned as the final stage of the quantum revolution, opening fundamentally new communication and computing capabilities, including distributed quantum computing. But the Quantum Internet is governed by the laws of quantum mechanics. Phenomena with no counterpart in classical networks, such as no-cloning, quantum measurement, entanglement and teleportation, impose very challenging constraints on the network design. Specifically, classical network functionalities, ranging from error-control mechanisms to overhead-control strategies, are based on the assumption that classical information can be safely read and copied. But this assumption does not hold in the Quantum Internet. As a consequence, the design of the Quantum Internet requires a major network-paradigm shift to harness the specificities of quantum mechanics. The goal of this work is to shed light on the challenges and open problems of Quantum Internet design. To this aim, we first introduce some basic knowledge of quantum mechanics needed to understand the differences between a classical and a quantum network. Then, we introduce quantum teleportation as the key strategy for transmitting quantum information without physically transferring the particle that stores it or violating the principles of quantum mechanics. Finally, the key research challenges in designing quantum communication networks are described.
With quantum computing technologies nearing the era of commercialization and quantum supremacy, machine learning (ML) appears as one of the promising killer applications. Despite significant effort, there has been a disconnect between most quantum ML proposals, the needs of ML practitioners, and the capabilities of near-term quantum devices to demonstrate quantum enhancement in the near future. In this contribution to the focus collection on What would you do with 1000 qubits?, we provide concrete examples of intractable ML tasks that could be enhanced with near-term devices. We argue that to reach this target, the focus should be on areas where ML researchers are struggling, such as generative models in unsupervised and semi-supervised learning, instead of the popular and more tractable supervised learning techniques. We also highlight the case of classical datasets with potential quantum-like statistical correlations where quantum models could be more suitable. We focus on hybrid quantum-classical approaches and illustrate some of the key challenges we foresee for near-term implementations. Finally, we introduce the quantum-assisted Helmholtz machine (QAHM), an attempt to use near-term quantum devices to tackle high-dimensional datasets of continuous variables. Instead of using quantum computers to assist deep learning, as previous approaches do, the QAHM uses deep learning to extract a low-dimensional binary representation of data, suitable for relatively small quantum processors which can assist the training of an unsupervised generative model. Although we illustrate this concept on a quantum annealer, other quantum platforms could benefit as well from this hybrid quantum-classical framework.
As current Noisy Intermediate Scale Quantum (NISQ) devices suffer from decoherence errors, any delay in the instruction execution of the quantum control microarchitecture can lead to the loss of quantum information and incorrect computation results. Hence, it is crucial for the control microarchitecture to issue quantum operations to the Quantum Processing Unit (QPU) in time. As in classical microarchitecture, parallelism in quantum programs needs to be exploited for speedup. However, three challenges emerge in the quantum scenario: 1) quantum feedback control can introduce significant pipeline stall latency; 2) timing control is required for all quantum operations; 3) the QPU requires a deterministic operation supply to prevent the accumulation of quantum errors. In this paper, we propose a novel control microarchitecture design to exploit Circuit Level Parallelism (CLP) and Quantum Operation Level Parallelism (QOLP). Firstly, we develop a multiprocessor architecture to exploit CLP, which supports dynamic scheduling of different sub-circuits. This architecture can handle parallel feedback control and minimize the potential overhead that disrupts timing control. Secondly, we propose a Quantum Superscalar approach that exploits QOLP by efficiently executing massive numbers of quantum instructions in parallel. Both methods issue quantum operations to the QPU deterministically. In a benchmark test of a Shor syndrome measurement, a six-core implementation of our proposal achieves up to a 2.59$\times$ speedup compared with a single core. For various canonical quantum computing algorithms, our superscalar approach achieves an average 4.04$\times$ improvement over a baseline design. Finally, we perform a simultaneous randomized benchmarking (simRB) experiment on a real QPU using the proposed microarchitecture for validation.
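
As a toy illustration of the circuit-level parallelism idea only (and not of the proposed microarchitecture itself), the sketch below assigns independent sub-circuits, each with a hypothetical duration, to the least-loaded of several cores and compares the resulting completion times; the core count and the durations are invented for the example.

```python
# Toy model of circuit-level parallelism (CLP): independent sub-circuits are
# dynamically dispatched to the least-loaded core. Durations and core count
# are invented for illustration; this is not the authors' design.
import heapq

def makespan(durations, cores):
    """Greedy longest-job-first assignment of sub-circuits to cores."""
    finish = [0.0] * cores                # per-core finish times (min-heap)
    heapq.heapify(finish)
    for d in sorted(durations, reverse=True):
        earliest = heapq.heappop(finish)  # least-loaded core gets the next job
        heapq.heappush(finish, earliest + d)
    return max(finish)

sub_circuits = [3.0, 2.5, 2.5, 2.0, 1.5, 1.0, 1.0, 0.5]  # hypothetical durations (us)
print("1 core :", makespan(sub_circuits, 1), "us")
print("6 cores:", makespan(sub_circuits, 6), "us")
```
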
We give an upper bound on the resources required for valuable quantum advantage in pricing derivatives. To do so, we give the first complete resource estimates for useful quantum derivative pricing, using autocallable and Target Accrual Redemption Forward (TARF) derivatives as benchmark use cases. We uncover blocking challenges in known approaches and introduce a new method for quantum derivative pricing, the re-parameterization method, that avoids them. This method combines pre-trained variational circuits with fault-tolerant quantum computing to dramatically reduce resource requirements. We find that the benchmark use cases we examine require 8k logical qubits and a T-depth of 54 million. We estimate that quantum advantage would require executing this program in about one second. While the resource requirements given here are out of reach of current systems, we hope they will provide a roadmap for further improvements in algorithms, implementations, and planned hardware architectures.