
Small, Highly Accurate Quantum Processor for Intermediate-Depth Quantum Simulations

Published by Poul S. Jessen
Publication date: 2019
Research field: Physics
Paper language: English





Analog quantum simulation is widely considered a step on the path to fault-tolerant quantum computation. If based on current noisy hardware, the accuracy of an analog simulator will degrade after just a few time steps, especially when simulating complex systems that are likely to exhibit quantum chaos. Here we describe a small, highly accurate quantum simulator and its use to run high-fidelity simulations of three different model Hamiltonians for $>100$ time steps. While not scalable to exponentially large Hilbert spaces, this platform provides the accuracy and programmability required for systematic exploration of the interplay between dynamics, imperfections, and accuracy in quantum simulation.
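
As a back-of-the-envelope illustration of this degradation, the sketch below (our own toy model, not the paper's platform or Hamiltonians) propagates a small spin chain under a target transverse-field Ising Hamiltonian and under a copy with a 1% miscalibrated transverse field, printing the overlap fidelity as the step count grows. The Hamiltonian, perturbation strength, and step duration are illustrative assumptions.

```python
# Toy illustration: fidelity loss per time step when the simulator
# Hamiltonian deviates slightly from the target Hamiltonian.
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op(single, site, n):
    """Embed a single-qubit operator at `site` in an n-qubit chain."""
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, single if k == site else I2)
    return out

n = 3
# Target: transverse-field Ising Hamiltonian (an illustrative choice).
H = sum(op(Z, k, n) @ op(Z, k + 1, n) for k in range(n - 1)) \
    + sum(op(X, k, n) for k in range(n))

# "Imperfect" simulator: the transverse field is miscalibrated by 1%.
H_err = H + 0.01 * sum(op(X, k, n) for k in range(n))

dt = 0.1                                  # duration of one time step
U_ideal = expm(-1j * H * dt)
U_noisy = expm(-1j * H_err * dt)

psi0 = np.zeros(2 ** n, dtype=complex)
psi0[0] = 1.0                             # start in |000>
psi_ideal, psi_noisy = psi0.copy(), psi0.copy()

for step in range(1, 101):
    psi_ideal = U_ideal @ psi_ideal
    psi_noisy = U_noisy @ psi_noisy
    if step % 20 == 0:
        fid = abs(np.vdot(psi_ideal, psi_noisy)) ** 2
        print(f"step {step:3d}  fidelity {fid:.4f}")
```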




Read also

Quantum control in large dimensional Hilbert spaces is essential for realizing the power of quantum information processing. For closed quantum systems the relevant input/output maps are unitary transformations, and the fundamental challenge becomes how to implement these with high fidelity in the presence of experimental imperfections and decoherence. For two-level systems (qubits) most aspects of unitary control are well understood, but for systems with Hilbert space dimension d>2 (qudits), many questions remain regarding the optimal design of control Hamiltonians and the feasibility of robust implementation. Here we show that arbitrary, randomly chosen unitary transformations can be efficiently designed and implemented in a large dimensional Hilbert space (d=16) associated with the electronic ground state of atomic 133Cs, achieving fidelities above 0.98 as measured by randomized benchmarking. Generalizing the concepts of inhomogeneous control and dynamical decoupling to d>2 systems, we further demonstrate that these qudit unitary maps can be made robust to both static and dynamic perturbations. Potential applications include improved fault-tolerance in universal quantum computation, nonclassical state preparation for high-precision metrology, implementation of quantum simulations, and the study of fundamental physics related to open quantum systems and quantum chaos.
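
For a flavor of what "randomly chosen unitary transformations in d=16" means numerically, here is a hedged sketch: it draws a Haar-random target unitary, models the implemented map as the target followed by a weak coherent control error, and evaluates the standard average-gate-fidelity formula. The error model and the strength `eps` are our assumptions; the experiment itself designs control waveforms for the 133Cs ground manifold, which this sketch does not attempt.

```python
# Sketch: Haar-random target unitary in d = 16 and the average gate
# fidelity of an implementation with a weak coherent error
# (F_avg = (d + |Tr(U^dag V)|^2) / (d + d^2), Nielsen's formula).
import numpy as np
from scipy.stats import unitary_group
from scipy.linalg import expm

d = 16
rng = np.random.default_rng(0)
U_target = unitary_group.rvs(d, random_state=rng)

# Assumed error model: a weak random Hermitian "control error".
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H_err = (A + A.conj().T) / 2
eps = 0.02                       # error strength (assumption)
U_impl = expm(-1j * eps * H_err) @ U_target

M = U_target.conj().T @ U_impl
F_avg = (d + abs(np.trace(M)) ** 2) / (d + d ** 2)
print(f"average gate fidelity ~ {F_avg:.4f}")
```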
Quantum algorithms for Noisy Intermediate-Scale Quantum (NISQ) machines have recently emerged as new promising routes towards demonstrating near-term quantum advantage (or supremacy) over classical systems. In these systems samples are typically drawn from probability distributions which, under plausible complexity-theoretic conjectures, cannot be efficiently generated classically. Rather than first define a physical system and then determine computational features of the output state, we ask the converse question: given direct access to the quantum state, what features of the generating system can we efficiently learn? In this work we introduce the Variational Quantum Unsampling (VQU) protocol, a nonlinear quantum neural network approach for verification and inference of near-term quantum circuit outputs. In our approach one can variationally train a quantum operation to unravel the action of an unknown unitary on a known input state; essentially learning the inverse of the black-box quantum dynamics. While the principle of our approach is platform independent, its implementation will depend on the unique architecture of a specific quantum processor. Here, we experimentally demonstrate the VQU protocol on a quantum photonic processor. Alongside quantum verification, our protocol has broad applications; including optimal quantum measurement and tomography, quantum sensing and imaging, and ansatz validation.
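
The core idea, variationally learning the inverse of a black-box unitary, can be caricatured on a single qubit. The sketch below (a classical toy, not the photonic implementation) trains a parameterized V(theta) so that V(theta) U |0> returns to |0>, using only the population of the first basis state as the cost; the Rz-Ry-Rz ansatz and the Nelder-Mead optimizer are arbitrary choices of ours.

```python
# Toy single-qubit version of the "unsampling" idea: train V(theta)
# to invert an unknown black-box unitary U on the input |0>.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import unitary_group

U = unitary_group.rvs(2, random_state=np.random.default_rng(1))
psi_in = np.array([1.0, 0.0], dtype=complex)
psi_out = U @ psi_in             # output of the black box

def V(theta):
    """Generic single-qubit unitary (up to phase): Rz(a) Ry(b) Rz(c)."""
    a, b, c = theta
    rz = lambda t: np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])
    ry = lambda t: np.array([[np.cos(t / 2), -np.sin(t / 2)],
                             [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)
    return rz(a) @ ry(b) @ rz(c)

def cost(theta):
    # Infidelity of returning to |0>; needs only the measured
    # population of the first basis state.
    return 1.0 - abs((V(theta) @ psi_out)[0]) ** 2

res = minimize(cost, x0=np.zeros(3), method="Nelder-Mead")
print(f"final infidelity: {res.fun:.2e}")
```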
The successful implementation of algorithms on quantum processors relies on the accurate control of quantum bits (qubits) to perform logic gate operations. In this era of noisy intermediate-scale quantum (NISQ) computing, systematic miscalibrations, drift, and crosstalk in the control of qubits can lead to a coherent form of error which has no classical analog. Coherent errors severely limit the performance of quantum algorithms in an unpredictable manner, and mitigating their impact is necessary for realizing reliable quantum computations. Moreover, the average error rates measured by randomized benchmarking and related protocols are not sensitive to the full impact of coherent errors, and therefore do not reliably predict the global performance of quantum algorithms, leaving us unprepared to validate the accuracy of future large-scale quantum computations. Randomized compiling is a protocol designed to overcome these performance limitations by converting coherent errors into stochastic noise, dramatically reducing unpredictable errors in quantum algorithms and enabling accurate predictions of algorithmic performance from error rates measured via cycle benchmarking. In this work, we demonstrate significant performance gains under randomized compiling for the four-qubit quantum Fourier transform algorithm and for random circuits of variable depth on a superconducting quantum processor. Additionally, we accurately predict algorithm performance using experimentally-measured error rates. Our results demonstrate that randomized compiling can be utilized to leverage and predict the capabilities of modern-day noisy quantum processors, paving the way forward for scalable quantum computing.
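
The mechanism behind randomized compiling, averaging a coherent error over random Pauli frames so it becomes stochastic, can be shown on one qubit. The sketch below (an illustration of the twirling principle only, not the full compilation protocol with easy/hard gate cycles) compares the Pauli transfer matrix of a coherent over-rotation with that of its Pauli-twirled average; the twirled matrix is diagonal, i.e. a purely stochastic Pauli channel.

```python
# One-qubit illustration of Pauli twirling, the averaging mechanism
# that randomized compiling exploits to tailor coherent errors into
# stochastic noise.
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, X, Y, Z]

eps = 0.1
E = expm(-1j * eps * X)          # coherent error: small X over-rotation

def ptm(channel):
    """Pauli transfer matrix R_ij = Tr(P_i * channel(P_j)) / 2."""
    return np.real(np.array(
        [[np.trace(Pi @ channel(Pj)) / 2 for Pj in paulis]
         for Pi in paulis]))

bare = lambda rho: E @ rho @ E.conj().T
twirled = lambda rho: sum(P @ E @ P @ rho @ P @ E.conj().T @ P
                          for P in paulis) / 4

print("bare error PTM:\n", ptm(bare).round(3))
print("twirled error PTM (diagonal => stochastic):\n",
      ptm(twirled).round(3))
```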
Yulin Wu, Wan-Su Bao, Sirui Cao (2021)
Scaling up to a large number of qubits with high-precision control is essential in the demonstrations of quantum computational advantage to exponentially outpace the classical hardware and algorithmic improvements. Here, we develop a two-dimensional programmable superconducting quantum processor, Zuchongzhi, which is composed of 66 functional qubits in a tunable coupling architecture. To characterize the performance of the whole system, we perform random quantum circuit sampling for benchmarking, up to a system size of 56 qubits and 20 cycles. The computational cost of the classical simulation of this task is estimated to be 2-3 orders of magnitude higher than the previous work on the 53-qubit Sycamore processor [Nature 574, 505 (2019)]. We estimate that the sampling task finished by Zuchongzhi in about 1.2 hours would take the most powerful supercomputer at least 8 years. Our work establishes an unambiguous quantum computational advantage that is infeasible for classical computation in a reasonable amount of time. The high-precision and programmable quantum computing platform opens a new door to explore novel many-body phenomena and implement complex quantum algorithms.
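
The benchmarking quantity behind such claims is the linear cross-entropy fidelity, F_XEB = 2^n <p_ideal(x)> - 1, averaged over sampled bitstrings x. A toy sketch of ours (with a Haar-random unitary standing in for a random circuit, at sizes far below the 56-qubit experiment) shows that a noiseless sampler scores near 1 while uniform noise scores near 0.

```python
# Sketch of linear cross-entropy benchmarking (XEB):
# F_XEB = 2^n * mean(p_ideal(sampled bitstring)) - 1.
import numpy as np
from scipy.stats import unitary_group

n = 5                               # toy size; real devices use 50+
dim = 2 ** n
rng = np.random.default_rng(2)

# Stand-in for a random circuit: a Haar-random unitary on n qubits.
U = unitary_group.rvs(dim, random_state=rng)
p_ideal = np.abs(U[:, 0]) ** 2      # output distribution from |0...0>

def xeb(samples):
    return dim * p_ideal[samples].mean() - 1

perfect = rng.choice(dim, size=20000, p=p_ideal)   # noiseless sampler
uniform = rng.integers(dim, size=20000)            # fully depolarized
print(f"XEB, perfect sampler: {xeb(perfect):.3f} (~1 expected)")
print(f"XEB, uniform noise:   {xeb(uniform):.3f} (~0 expected)")
```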
Solving finite-temperature properties of quantum many-body systems is generally challenging to classical computers due to their high computational complexities. In this article, we present experiments to demonstrate a hybrid quantum-classical simulation of thermal quantum states. By combining a classical probabilistic model and a 5-qubit programmable superconducting quantum processor, we prepare Gibbs states and excited states of Heisenberg XY and XXZ models with high fidelity and compute thermal properties including the variational free energy, energy, and entropy with a small statistical error. Our approach combines the advantage of classical probabilistic models for sampling and quantum co-processors for unitary transformations. We show that the approach is scalable in the number of qubits, and has a self-verifiable feature, revealing its potential in solving large-scale quantum statistical mechanics problems on near-term intermediate-scale quantum computers.
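
As a point of reference for the quantities this hybrid scheme estimates, the sketch below computes the exact Gibbs-state free energy, energy, and entropy of a small Heisenberg XY chain by diagonalization; the chain length, couplings, and inverse temperature beta are assumptions chosen for illustration.

```python
# Exact-diagonalization reference: Gibbs-state thermodynamics of a
# small Heisenberg XY chain (free energy F, energy E, entropy S).
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def op(single, site, n):
    """Embed a single-qubit operator at `site` in an n-qubit chain."""
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, single if k == site else I2)
    return out

n = 5                                # matches a 5-qubit processor
H = sum(op(X, k, n) @ op(X, k + 1, n) + op(Y, k, n) @ op(Y, k + 1, n)
        for k in range(n - 1))

beta = 1.0                           # inverse temperature (assumption)
evals = np.linalg.eigvalsh(H)
Z = np.sum(np.exp(-beta * evals))    # partition function
F = -np.log(Z) / beta                # free energy
E = np.sum(evals * np.exp(-beta * evals)) / Z
S = beta * (E - F)                   # von Neumann entropy (in nats)
print(f"F = {F:.4f}, E = {E:.4f}, S = {S:.4f}")
```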
