
Extending the computational reach of a noisy superconducting quantum processor

Added by Abhinav Kandala
Publication date: 2018
Field: Physics
Language: English





Quantum computation, a completely different paradigm of computing, benefits from theoretically proven speed-ups for certain problems and opens up the possibility of exactly studying the properties of quantum systems. Yet, because of the inherently fragile nature of the physical computing elements, qubits, achieving quantum advantages over classical computation requires extremely low error rates for qubit operations as well as a significant overhead of physical qubits, in order to realize fault tolerance via quantum error correction. However, recent theoretical work has shown that the accuracy of computations based on expectation values of quantum observables can be enhanced by extrapolating the results of a collection of experiments run at varying noise levels. Here, we demonstrate this error mitigation protocol on a superconducting quantum processor, enhancing its computational capability with no additional hardware modifications. We apply the protocol to mitigate errors in canonical single- and two-qubit experiments and then extend its application to the variational optimization of Hamiltonians for quantum chemistry and magnetism. We demonstrate that the suppression of incoherent errors gives our noisy processor access to accuracies in the variational solutions that would otherwise be out of reach. These results show that error mitigation techniques will be critical to significantly enhance the capabilities of near-term quantum computing hardware.
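The protocol's key step, extrapolating to the zero-noise limit, is simple enough to sketch directly. Below is a minimal, hardware-free illustration of Richardson-style zero-noise extrapolation: expectation values measured at several amplified noise levels are fit with a low-order polynomial and evaluated at zero noise. The stretch factors and measured values here are hypothetical placeholders, not data from the paper.

```python
import numpy as np

def zero_noise_extrapolation(stretch_factors, expectation_values, order=1):
    """Estimate the zero-noise expectation value by fitting a polynomial
    in the noise stretch factor c and evaluating the fit at c = 0.

    stretch_factors    : noise amplification factors c_i >= 1 (c = 1 is the bare circuit)
    expectation_values : measured <O>(c_i) at each stretch factor
    order              : polynomial order of the fit (1 = linear extrapolation)
    """
    coeffs = np.polyfit(stretch_factors, expectation_values, deg=order)
    return np.polyval(coeffs, 0.0)

# Hypothetical measurements of an observable at stretch factors 1, 1.5, and 2
c = [1.0, 1.5, 2.0]
ev = [0.81, 0.72, 0.64]
print(zero_noise_extrapolation(c, ev))  # mitigated estimate, ~0.98
```

In practice the noise is amplified without changing the ideal circuit, for example by stretching the gate pulses, and the fit order trades bias against sensitivity to shot noise.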




Related research

Yulin Wu, Wan-Su Bao, Sirui Cao (2021)
Scaling up to a large number of qubits with high-precision control is essential in the demonstration of quantum computational advantage, to exponentially outpace classical hardware and algorithmic improvements. Here, we develop a two-dimensional programmable superconducting quantum processor, Zuchongzhi, which is composed of 66 functional qubits in a tunable coupling architecture. To characterize the performance of the whole system, we perform random quantum circuit sampling for benchmarking, up to a system size of 56 qubits and 20 cycles. The computational cost of classically simulating this task is estimated to be 2-3 orders of magnitude higher than for the previous work on the 53-qubit Sycamore processor [Nature 574, 505 (2019)]. We estimate that the sampling task finished by Zuchongzhi in about 1.2 hours would take the most powerful supercomputer at least 8 years. Our work establishes an unambiguous quantum computational advantage that is infeasible for classical computation in a reasonable amount of time. The high-precision and programmable quantum computing platform opens a new door to exploring novel many-body phenomena and implementing complex quantum algorithms.
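Random circuit sampling experiments of this kind are typically scored with the linear cross-entropy benchmarking (XEB) fidelity, which compares the classically simulated ideal probabilities of the bitstrings the device actually produced against the uniform baseline. A minimal sketch, using a toy Porter-Thomas model in place of real circuit simulations:

```python
import numpy as np

def linear_xeb_fidelity(ideal_probs, n_qubits):
    """Linear XEB fidelity: F = d * mean(p(x_i)) - 1, where the p(x_i) are the
    noiseless probabilities of the bitstrings x_i sampled from the hardware."""
    d = 2 ** n_qubits
    return d * np.mean(ideal_probs) - 1.0

# Toy model: a Porter-Thomas (exponential) ideal distribution over d bitstrings.
rng = np.random.default_rng(0)
n = 10
d = 2 ** n
p = rng.exponential(scale=1 / d, size=d)

noisy = rng.choice(p, size=20_000)                 # depolarized device: uniform bitstrings
ideal = rng.choice(p, size=20_000, p=p / p.sum())  # perfect device: samples the ideal distribution
print(linear_xeb_fidelity(noisy, n))   # ~ 0
print(linear_xeb_fidelity(ideal, n))   # ~ 1
```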
The successful implementation of algorithms on quantum processors relies on the accurate control of quantum bits (qubits) to perform logic gate operations. In this era of noisy intermediate-scale quantum (NISQ) computing, systematic miscalibrations, drift, and crosstalk in the control of qubits can lead to a coherent form of error which has no classical analog. Coherent errors severely limit the performance of quantum algorithms in an unpredictable manner, and mitigating their impact is necessary for realizing reliable quantum computations. Moreover, the average error rates measured by randomized benchmarking and related protocols are not sensitive to the full impact of coherent errors, and therefore do not reliably predict the global performance of quantum algorithms, leaving us unprepared to validate the accuracy of future large-scale quantum computations. Randomized compiling is a protocol designed to overcome these performance limitations by converting coherent errors into stochastic noise, dramatically reducing unpredictable errors in quantum algorithms and enabling accurate predictions of algorithmic performance from error rates measured via cycle benchmarking. In this work, we demonstrate significant performance gains under randomized compiling for the four-qubit quantum Fourier transform algorithm and for random circuits of variable depth on a superconducting quantum processor. Additionally, we accurately predict algorithm performance using experimentally measured error rates. Our results demonstrate that randomized compiling can be utilized to leverage and predict the capabilities of modern-day noisy quantum processors, paving the way forward for scalable quantum computing.
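The core mechanism behind randomized compiling, Pauli twirling, can be illustrated on a single qubit: dressing a noisy cycle with uniformly random Pauli gates (and undoing them) averages away the coherent, off-diagonal part of the error, leaving an equivalent stochastic Pauli channel whose error rates benchmarking can then predict. A self-contained numerical sketch, where the over-rotation error and its angle are hypothetical:

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [I, X, Y, Z]

# Coherent error: an unwanted over-rotation about X by a small angle theta.
theta = 0.1
E = np.cos(theta / 2) * I - 1j * np.sin(theta / 2) * X

def bare(rho):
    """The noisy cycle as-is: a purely coherent error channel."""
    return E @ rho @ E.conj().T

def twirled(rho):
    """Pauli-twirled cycle: average P bare(P rho P) P over the Pauli group
    (Paulis are Hermitian and self-inverse, so P doubles as its own inverse)."""
    return sum(P @ bare(P @ rho @ P) @ P for P in PAULIS) / 4

def ptm(channel):
    """Pauli transfer matrix R[i, j] = tr(P_i channel(P_j)) / 2."""
    return np.array([[np.trace(Pi @ channel(Pj)).real / 2 for Pj in PAULIS]
                     for Pi in PAULIS])

print(np.round(ptm(bare), 3))     # off-diagonal sin(theta) entries: coherent error
print(np.round(ptm(twirled), 3))  # diagonal only: an equivalent stochastic Pauli channel
```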
As progress is made towards the first generation of error-corrected quantum computers, careful characterization of a processor's noise environment will be crucial to designing tailored, low-overhead error correction protocols. While standard coherence metrics and characterization protocols such as T1 and T2 measurements, process tomography, and randomized benchmarking are now ubiquitous, these techniques provide only partial information about the dynamic multi-qubit loss channels responsible for processor errors, which can be described more fully by a Lindblad operator in the master equation formalism. Here, we introduce and experimentally demonstrate Lindblad Tomography, a hardware-agnostic characterization protocol for tomographically reconstructing the Hamiltonian and Lindblad operators of a quantum channel from an ensemble of time-domain measurements. Performing Lindblad Tomography on a small superconducting quantum processor, we show that this technique characterizes and accounts for state-preparation and measurement (SPAM) errors and allows one to place strong bounds on the degree of non-Markovianity in the channels of interest. Comparing the results of single- and two-qubit measurements on a superconducting quantum processor, we demonstrate that Lindblad Tomography can also be used to identify and quantify sources of crosstalk on quantum processors, such as the presence of always-on qubit-qubit interactions.
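For reference, the objects Lindblad Tomography reconstructs are the Hamiltonian H and the jump operators L_k of the standard (GKSL) master equation, with ħ = 1:

\[
\frac{d\rho}{dt} = -i\,[H,\rho] + \sum_k \left( L_k \rho L_k^{\dagger} - \tfrac{1}{2}\left\{ L_k^{\dagger} L_k,\, \rho \right\} \right)
\]

The first term generates the coherent dynamics; the sum over k captures the dissipative loss channels the abstract refers to.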
The required precision to perform quantum simulations beyond the capabilities of classical computers imposes major experimental and theoretical challenges. Here, we develop a characterization technique to benchmark the implementation precision of a specific quantum simulation task. We infer all parameters of the bosonic Hamiltonian that governs the dynamics of excitations in a two-dimensional grid of nearest-neighbour coupled superconducting qubits. We devise a robust algorithm for identification of Hamiltonian parameters from measured time series of the expectation values of single-mode canonical coordinates. Using super-resolution and denoising methods, we first extract the eigenfrequencies of the governing Hamiltonian from the complex time-domain measurements; next, we recover the eigenvectors of the Hamiltonian via constrained manifold optimization over the orthogonal group. For five and six coupled qubits, we identify Hamiltonian parameters with sub-MHz precision and construct a spatial implementation error map for a grid of 27 qubits. Our approach enables us to distinguish and quantify the effects of state preparation and measurement errors and to show that they are the dominant sources of error in the implementation. Our results quantify the implementation accuracy of analog dynamics and introduce a diagnostic toolkit for understanding, calibrating, and improving analog quantum processors.
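As a simplified stand-in for that pipeline, the eigenfrequency-extraction step can be sketched with a plain FFT peak search over the measured complex time series; the paper's super-resolution and denoising methods are considerably more refined, and the two-mode test signal below is hypothetical:

```python
import numpy as np

def extract_eigenfrequencies(signal, dt, n_modes):
    """Toy frequency identification: return the n_modes frequencies with the
    largest discrete-Fourier magnitudes in a complex time series sampled every dt."""
    spectrum = np.fft.fft(signal)
    freqs = np.fft.fftfreq(len(signal), d=dt)
    top = np.argsort(np.abs(spectrum))[-n_modes:]  # indices of the strongest peaks
    return np.sort(freqs[top])

# Hypothetical two-mode signal at 3 MHz and 5 MHz, sampled every 10 ns for 2 us
dt = 10e-9
t = np.arange(0, 2e-6, dt)
sig = np.exp(2j * np.pi * 3e6 * t) + 0.7 * np.exp(2j * np.pi * 5e6 * t)
print(extract_eigenfrequencies(sig, dt, n_modes=2))  # ~ [3e6, 5e6]
```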
We present a quantum kernel method for high-dimensional data analysis using Google's universal quantum processor, Sycamore. This method is successfully applied to the cosmological benchmark of supernova classification using real spectral features, with no dimensionality reduction and without vanishing kernel elements. Instead of using a synthetic dataset of low dimension or pre-processing the data with a classical machine learning algorithm to reduce its dimension, this experiment demonstrates that machine learning with real, high-dimensional data is possible using a quantum processor, but it requires careful attention to shot statistics and mean kernel element size when constructing a circuit ansatz. Our experiment utilizes 17 qubits to classify 67-dimensional data - significantly higher dimensionality than the largest prior quantum kernel experiments - resulting in classification accuracy that is competitive with noiseless simulation and comparable classical techniques.
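In a quantum kernel method, the processor's job ends once the kernel (Gram) matrix is estimated from overlap-circuit measurements; classification itself is an ordinary kernel SVM on the precomputed entries. A minimal sketch, using a classically generated stand-in Gram matrix in place of hardware-estimated values K[i, j] = |<phi(x_i)|phi(x_j)>|^2:

```python
import numpy as np
from sklearn.svm import SVC

# Stand-in data and Gram matrix; on hardware, each K[i, j] would instead be
# estimated from repeated measurements (shots) of a kernel-overlap circuit.
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 3))
y = (X[:, 0] > 0).astype(int)
K = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1))  # classical RBF Gram matrix

clf = SVC(kernel="precomputed")        # classical SVM consumes the precomputed kernel
clf.fit(K[:30, :30], y[:30])           # train on the train-train Gram block
print(clf.score(K[30:, :30], y[30:]))  # test rows against training columns
```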
