
Quantum Computational Advantage via 60-Qubit 24-Cycle Random Circuit Sampling

Posted by Heliang Huang
Publication date: 2021
Research field: Physics
Paper language: English





To ensure a long-term quantum computational advantage, the quantum hardware should be upgraded to withstand the competition of continuously improved classical algorithms and hardware. Here, we demonstrate a superconducting quantum computing system, \textit{Zuchongzhi} 2.1, which has 66 qubits in a two-dimensional array with a tunable-coupler architecture. The readout fidelity of \textit{Zuchongzhi} 2.1 is considerably improved to an average of 97.74%. The more powerful quantum processor enables us to achieve larger-scale random quantum circuit sampling, with a system scale of up to 60 qubits and 24 cycles. The achieved sampling task is about 6 orders of magnitude more difficult than that of Sycamore [Nature \textbf{574}, 505 (2019)] in classical simulation, and 3 orders of magnitude more difficult than the sampling task on \textit{Zuchongzhi} 2.0 [arXiv:2106.14734 (2021)]. The time required to classically simulate the random circuit sampling experiment using the state-of-the-art classical algorithm and supercomputer is extended to tens of thousands of years (about $4.8\times 10^4$ years), while \textit{Zuchongzhi} 2.1 takes only about 4.2 hours, thereby significantly enhancing the quantum computational advantage.
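To make the sampling task concrete, below is a minimal NumPy sketch of random circuit sampling with a linear cross-entropy benchmark (XEB), at a toy size rather than the 60-qubit, 24-cycle scale reported above. The gate pattern (Haar-random single-qubit gates plus a 1D chain of CZ gates) and all names are illustrative assumptions, not the processor's actual gate set.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_su2():
    # Haar-random 2x2 unitary via QR of a complex Gaussian matrix
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def apply_1q(state, gate, q):
    # state has shape (2,)*n; contract gate input index with qubit axis q
    state = np.tensordot(gate, state, axes=([1], [q]))
    return np.moveaxis(state, 0, q)

def apply_2q(state, gate, q1, q2):
    # gate is 4x4; axes (2,3) of the reshaped gate act on qubits (q1,q2)
    g = gate.reshape(2, 2, 2, 2)
    state = np.tensordot(g, state, axes=([2, 3], [q1, q2]))
    return np.moveaxis(state, [0, 1], [q1, q2])

CZ = np.diag([1, 1, 1, -1]).astype(complex)

def random_circuit_state(n, cycles):
    state = np.zeros((2,) * n, dtype=complex)
    state[(0,) * n] = 1.0
    for c in range(cycles):
        for q in range(n):
            state = apply_1q(state, random_su2(), q)
        # alternating CZ layers on a 1D chain (toy stand-in for the 2D coupler grid)
        for q in range(c % 2, n - 1, 2):
            state = apply_2q(state, CZ, q, q + 1)
    return state.reshape(-1)

n, cycles, shots = 5, 8, 2000
psi = random_circuit_state(n, cycles)
probs = np.abs(psi) ** 2
probs /= probs.sum()
samples = rng.choice(2 ** n, size=shots, p=probs)

# Linear XEB: close to 1 for ideal samples, about 0 for uniformly random bitstrings
f_xeb = 2 ** n * probs[samples].mean() - 1
print(f"linear XEB fidelity estimate: {f_xeb:.3f}")
```

The experiment uses this kind of cross-entropy statistic to certify sampling fidelity at scales where the full output distribution cannot be computed classically.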


Read also

Gaussian boson sampling exploits squeezed states to provide a highly efficient way to demonstrate quantum computational advantage. We perform experiments with 50 input single-mode squeezed states with high indistinguishability and squeezing parameters, which are fed into a 100-mode ultralow-loss interferometer with full connectivity and random transformation, and sampled using 100 high-efficiency single-photon detectors. The whole optical set-up is phase-locked to maintain high coherence between the superposition of all photon number states. We observe up to 76 output photon clicks, which yield an output state-space dimension of $10^{30}$ and a sampling rate that is $10^{14}$ faster than using the state-of-the-art simulation strategy and supercomputers. The obtained samples are validated against various hypotheses, including using thermal states, distinguishable photons, and uniform distribution.
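As a rough illustration of the setup described above (squeezed inputs, a random interferometer, threshold detectors), the sketch below propagates the Gaussian covariance matrix of squeezed vacua through a Haar-random interferometer and computes per-mode mean photon numbers and marginal click probabilities. It uses a small toy size and a standard quadrature convention with vacuum covariance equal to the identity, and it does not reproduce joint click statistics (which require Torontonian-type calculations); all names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def haar_unitary(d):
    # Haar-random interferometer (unitary) via QR with phase correction
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def gbs_covariance(n_modes, n_squeezed, r):
    # quadrature ordering (x_1..x_M, p_1..p_M); vacuum covariance = identity
    sq = np.ones(n_modes)
    sq[:n_squeezed] = np.exp(-2 * r)          # squeezed x-quadratures
    V_in = np.diag(np.concatenate([sq, 1 / sq]))
    U = haar_unitary(n_modes)
    # symplectic (orthogonal) representation of a lossless passive interferometer
    S = np.block([[U.real, -U.imag], [U.imag, U.real]])
    return S @ V_in @ S.T

M, K, r = 16, 8, 0.5                          # toy sizes; the experiment used M=100, K=50
V = gbs_covariance(M, K, r)
Vx, Vp = V[:M, :M], V[M:, M:]

# per-mode mean photon number; the lossless interferometer preserves the total,
# so the sum equals K * sinh(r)**2
n_mean = (np.diag(Vx) + np.diag(Vp) - 2) / 4

# marginal click probability of a threshold detector: 1 - P(vacuum) for each mode
p_click = np.array([
    1 - 2 / np.sqrt(np.linalg.det(V[np.ix_([j, M + j], [j, M + j])] + np.eye(2)))
    for j in range(M)
])
print("total mean photons:", n_mean.sum())
print("mean click probability:", p_click.mean())
```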
Motivated by the recent experimental demonstrations of quantum supremacy, proving the hardness of the output of random quantum circuits is an imperative near-term goal. We prove, under the complexity-theoretic assumption of the non-collapse of the polynomial hierarchy, that approximating the output probabilities of random quantum circuits to within $\exp(-\Omega(m\log m))$ additive error is hard for any classical computer, where $m$ is the number of gates in the quantum computation. More precisely, we show that the above problem is $\#\mathsf{P}$-hard under $\mathsf{BPP}^{\mathsf{NP}}$ reduction. In the recent experiments, the quantum circuit has $n$ qubits and the architecture is a two-dimensional grid of size $\sqrt{n}\times\sqrt{n}$. Indeed, for constant-depth circuits, approximating the output probabilities to within $2^{-\Omega(n\log{n})}$ is hard. For circuits of depth $\log{n}$ or $\sqrt{n}$, for which the anti-concentration property holds, approximating the output probabilities to within $2^{-\Omega(n\log^2{n})}$ and $2^{-\Omega(n^{3/2}\log n)}$, respectively, is hard. We made an effort to find the best proofs and proved these results from first principles, which do not use the standard techniques such as the Berlekamp--Welch algorithm, the usual Paturi's lemma, and Rakhmanov's result.
Ramis Movassagh (2018)
One-parameter interpolations between any two unitary matrices (e.g., quantum gates) $U_1$ and $U_2$ along efficient paths contained in the unitary group are constructed. Motivated by applications, we propose the continuous unitary path $U(\theta)$ obtained from the QR-factorization \[ U(\theta)R(\theta)=(1-\theta)A+\theta B, \] where $U_1 R_1=A$ and $U_2 R_2=B$ are the QR-factorizations of $A$ and $B$, and $U(\theta)$ is a unitary for all $\theta$ with $U(0)=U_1$ and $U(1)=U_2$. The QR-algorithm is modified to output, instead of $U(\theta)$, a matrix whose columns are proportional to the corresponding columns of $U(\theta)$ and whose entries are polynomial or rational functions of $\theta$. By an extension of the Berlekamp--Welch algorithm we show that rational functions can be efficiently and exactly interpolated with respect to $\theta$. We then construct probability distributions over unitaries that are arbitrarily close to the Haar measure. Demonstration of computational advantages of NISQ devices over classical computers is an imperative near-term goal, especially with the exuberant experimental frontier in academia and industry (e.g., IBM and Google). A candidate for quantum computational supremacy is Random Circuit Sampling (RCS), which is the task of sampling from the output distribution of a random circuit. The aforementioned mathematical results provide a new way of scrambling quantum circuits and are applied to prove that exact RCS is $\#P$-hard on average, which is a simpler alternative to Bouland et al.'s. (Dis)proving the quantum supremacy conjecture requires approximate average-case hardness; this remains an open problem for all quantum supremacy proposals.
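A minimal numerical sketch of the QR-interpolation idea, taking the special case $A=U_1$, $B=U_2$ (i.e., $R_1=R_2=I$) and fixing the column-phase ambiguity of the QR factorization so that the endpoints are recovered exactly; the convex combination is assumed nonsingular, which holds generically. The function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_unitary(d):
    # Haar-random unitary via QR with phase correction
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def interpolate_unitary(U1, U2, theta):
    # QR-factorize the combination (1-theta)*U1 + theta*U2 (special case A=U1, B=U2),
    # assumed nonsingular; Q is unitary for every theta.
    M = (1 - theta) * U1 + theta * U2
    Q, R = np.linalg.qr(M)
    # fix the column-phase ambiguity of QR so the endpoints U(0)=U1, U(1)=U2 are exact
    return Q * (np.diag(R) / np.abs(np.diag(R)))

U1, U2 = haar_unitary(4), haar_unitary(4)
for theta in (0.0, 0.3, 1.0):
    U = interpolate_unitary(U1, U2, theta)
    print(theta, np.allclose(U.conj().T @ U, np.eye(4)))   # unitary along the whole path

print(np.allclose(interpolate_unitary(U1, U2, 0.0), U1))   # U(0) = U1
print(np.allclose(interpolate_unitary(U1, U2, 1.0), U2))   # U(1) = U2
```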
Photonics is a promising platform for demonstrating quantum computational supremacy (QCS) by convincingly outperforming the most powerful classical supercomputers on a well-defined computational task. Despite this promise, existing photonics proposals and demonstrations face significant hurdles. Experimentally, current implementations of Gaussian boson sampling (GBS) lack programmability or have prohibitive loss rates. Theoretically, there is a comparative lack of rigorous evidence for the classical hardness of GBS. In this work, we make significant progress in improving both the theoretical evidence and experimental prospects. On the theory side, we provide strong evidence for the hardness of Gaussian boson sampling, placing it on par with the strongest theoretical proposals for QCS. On the experimental side, we propose a new QCS architecture, high-dimensional Gaussian boson sampling, which is programmable and can be implemented with low loss rates using few optical components. We show that particular classical algorithms for simulating GBS are vastly outperformed by high-dimensional Gaussian boson sampling experiments at modest system sizes. This work thus opens the path to demonstrating QCS with programmable photonic processors.
Yulin Wu, Wan-Su Bao, Sirui Cao (2021)
Scaling up to a large number of qubits with high-precision control is essential in demonstrations of quantum computational advantage, to exponentially outpace classical hardware and algorithmic improvements. Here, we develop a two-dimensional programmable superconducting quantum processor, \textit{Zuchongzhi}, which is composed of 66 functional qubits in a tunable coupling architecture. To characterize the performance of the whole system, we perform random quantum circuit sampling for benchmarking, up to a system size of 56 qubits and 20 cycles. The computational cost of classically simulating this task is estimated to be 2-3 orders of magnitude higher than the previous work on the 53-qubit Sycamore processor [Nature \textbf{574}, 505 (2019)]. We estimate that the sampling task finished by \textit{Zuchongzhi} in about 1.2 hours would take the most powerful supercomputer at least 8 years. Our work establishes an unambiguous quantum computational advantage that is infeasible for classical computation in a reasonable amount of time. The high-precision and programmable quantum computing platform opens a new door to exploring novel many-body phenomena and implementing complex quantum algorithms.