In the near term, hybrid quantum-classical algorithms hold great potential for outperforming classical approaches. Understanding how these two computing paradigms work in tandem is critical for identifying areas where such hybrid algorithms could provide a quantum advantage. In this work, we study a QAOA-based quantum optimization algorithm by implementing the Variational Quantum Factoring (VQF) algorithm. We execute experimental demonstrations using a superconducting quantum processor and investigate the trade-off between quantum resources (number of qubits and circuit depth) and the probability that a given biprime is successfully factored. In our experiments, the integers 1099551473989, 3127, and 6557 are factored with 3, 4, and 5 qubits, respectively, using a QAOA ansatz with up to 8 layers, and we identify the optimal number of circuit layers for a given instance that maximizes the success probability. Furthermore, we demonstrate the impact of different noise sources on the performance of QAOA and identify the coherent error caused by residual ZZ-coupling between qubits as a dominant source of error on the superconducting quantum processor.
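To make the layered ansatz concrete, the following is a minimal NumPy sketch of a depth-p QAOA circuit acting on a diagonal Ising cost Hamiltonian, with the success probability read off as the weight on a target bitstring. The couplings h and J, the target state, and the angles are illustrative placeholders, not the actual VQF Hamiltonian or the optimized parameters from the experiment.

```python
# Minimal QAOA ansatz sketch (pure NumPy); couplings and target are illustrative.
import numpy as np

n = 3                                   # number of qubits
dim = 2 ** n

# Diagonal of an illustrative Ising cost H_C = sum_i h_i Z_i + sum_{i<j} J_ij Z_i Z_j
rng = np.random.default_rng(0)
h = rng.normal(size=n)
J = np.triu(rng.normal(size=(n, n)), k=1)

def z_val(bit, i):
    """Eigenvalue of Z_i on the computational basis state labeled by integer `bit`."""
    return 1.0 - 2.0 * ((bit >> (n - 1 - i)) & 1)

cost_diag = np.array([
    sum(h[i] * z_val(b, i) for i in range(n))
    + sum(J[i, j] * z_val(b, i) * z_val(b, j) for i in range(n) for j in range(i + 1, n))
    for b in range(dim)
])

# Single-qubit X operators embedded in the full 2^n-dimensional space
X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def x_on(i):
    op = np.array([[1.0 + 0j]])
    for q in range(n):
        op = np.kron(op, X if q == i else I2)
    return op

def qaoa_state(gammas, betas):
    """|psi(gamma, beta)> for a depth-p QAOA ansatz on the cost above."""
    psi = np.full(dim, 1 / np.sqrt(dim), dtype=complex)    # |+>^n initial state
    for gamma, beta in zip(gammas, betas):
        psi = np.exp(-1j * gamma * cost_diag) * psi        # cost layer (diagonal phase)
        for i in range(n):                                 # mixer layer: exp(-i beta X_i) on each qubit
            psi = (np.cos(beta) * np.eye(dim) - 1j * np.sin(beta) * x_on(i)) @ psi
    return psi

# Success probability of measuring a target "factoring" bitstring, e.g. |101>
target = 0b101
p_layers = 4
gammas, betas = rng.uniform(0, np.pi, p_layers), rng.uniform(0, np.pi / 2, p_layers)
psi = qaoa_state(gammas, betas)
print("success probability:", abs(psi[target]) ** 2)
```

In an actual run the angles would be optimized by a classical outer loop, and the trade-off discussed above appears as the dependence of this success probability on p_layers and on the qubit count n.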
Computing the finite-temperature properties of quantum many-body systems is generally challenging for classical computers because of the high computational complexity involved. In this article, we present experiments that demonstrate a hybrid quantum-classical simulation of thermal quantum states. By combining a classical probabilistic model and a 5-qubit programmable superconducting quantum processor, we prepare Gibbs states and excited states of Heisenberg XY and XXZ models with high fidelity and compute thermal properties, including the variational free energy, energy, and entropy, with small statistical errors. Our approach combines the advantages of classical probabilistic models for sampling with quantum co-processors for unitary transformations. We show that the approach is scalable in the number of qubits and has a self-verifying feature, revealing its potential for solving large-scale quantum statistical mechanics problems on near-term intermediate-scale quantum computers.
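To make the variational free energy concrete, here is a small NumPy sketch under simplifying assumptions: a 2-qubit Heisenberg XY Hamiltonian, a toy classical distribution over basis states, and a single entangling generator standing in for the parametrized circuit. The crude parameter scan is a placeholder for the classical optimization loop used in practice.

```python
# Hybrid Gibbs-state sketch (pure NumPy); the circuit and distribution are illustrative.
import numpy as np

# Pauli matrices and the 2-qubit XY Hamiltonian H = J (X X + Y Y)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
J = 1.0
H = J * (np.kron(X, X) + np.kron(Y, Y))

def expm_herm(A):
    """exp(-i A) for Hermitian A via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.exp(-1j * w)) @ V.conj().T

def unitary(phi):
    """Toy parametrized circuit: one entangling XX+YY layer with angle phi."""
    return expm_herm(phi * (np.kron(X, X) + np.kron(Y, Y)))

def free_energy(logits, phi, T):
    """Variational free energy F = E - T*S for classical model p(s) plus unitary U."""
    p = np.exp(logits - logits.max())
    p /= p.sum()                                       # softmax over the 4 basis states
    U = unitary(phi)
    energies = np.real(np.diag(U.conj().T @ H @ U))    # <s|U^dag H U|s> for each basis state s
    E = p @ energies                                   # energy of the variational thermal state
    S = -np.sum(p * np.log(p + 1e-12))                 # entropy, inherited from the classical model
    return E - T * S

# Crude parameter scan standing in for the classical optimization loop
T = 1.0
best = min(
    (free_energy(np.array([a, b, b, a]), phi, T), a, b, phi)
    for a in np.linspace(-2.0, 2.0, 9)
    for b in np.linspace(-2.0, 2.0, 9)
    for phi in np.linspace(0.0, np.pi, 9)
)
print("variational free energy:", best[0])
```

The division of labor matches the description above: the classical model supplies samples and the entropy, while the unitary (run on the quantum co-processor in the experiment) supplies the energy estimates; the entropy is unchanged by the unitary, which is what makes the free energy self-verifying as a variational upper bound.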
Integer factorization has been one of the cornerstone applications of the field of quantum computing since the discovery of an efficient factoring algorithm by Peter Shor. Unfortunately, factoring via Shor's algorithm is well beyond the capabilities of today's noisy intermediate-scale quantum (NISQ) devices. In this work, we revisit the problem of factoring and develop an alternative to Shor's algorithm that employs established techniques to map the factoring problem to the ground state of an Ising Hamiltonian. The proposed variational quantum factoring (VQF) algorithm starts by simplifying equations over Boolean variables in a preprocessing step, reducing the number of qubits needed for the Hamiltonian. It then seeks an approximate ground state of the resulting Ising Hamiltonian by training variational circuits using the quantum approximate optimization algorithm (QAOA). We benchmark the VQF algorithm on various instances of factoring and present numerical results on its performance.
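The mapping from factoring to energy minimization can be illustrated with a toy cost function. The sketch below uses the simple squared residual (N - p(x) q(x))^2 over candidate factor bits rather than the column-wise binary-multiplication clauses (with carry bits) and the classical preprocessing that VQF actually uses; the biprime 3127 = 53 x 59 is taken from the instances mentioned above.

```python
# Toy "factoring as energy minimization" sketch; the cost function is a simplified
# stand-in for the VQF clause-based Ising Hamiltonian.
import itertools

N = 3127                      # biprime from the text (3127 = 53 * 59)
n_p, n_q = 6, 6               # assumed bit widths of the two factors

def to_int(bits):
    """Interpret a bit tuple (least-significant bit first) as an integer."""
    return sum(b << k for k, b in enumerate(bits))

best_cost, best_pq = None, None
for p_bits in itertools.product([0, 1], repeat=n_p - 1):
    for q_bits in itertools.product([0, 1], repeat=n_q - 1):
        p = to_int((1,) + p_bits)          # odd factor: fix the least significant bit to 1
        q = to_int((1,) + q_bits)
        cost = (N - p * q) ** 2            # classical stand-in for the Ising cost
        if best_cost is None or cost < best_cost:
            best_cost, best_pq = cost, (p, q)

print("minimum cost:", best_cost, "at p, q =", best_pq)   # expect 0 at (53, 59) or (59, 53)
```

In VQF this cost is encoded as a diagonal Hamiltonian whose ground state is the factoring solution, the Boolean preprocessing eliminates most of the bits classically, and QAOA searches the remaining exponentially large space instead of the brute-force loop used here.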
Quantum algorithms for Noisy Intermediate-Scale Quantum (NISQ) machines have recently emerged as promising routes towards demonstrating near-term quantum advantage (or supremacy) over classical systems. In these systems, samples are typically drawn from probability distributions which, under plausible complexity-theoretic conjectures, cannot be efficiently generated classically. Rather than first defining a physical system and then determining the computational features of its output state, we ask the converse question: given direct access to the quantum state, what features of the generating system can we efficiently learn? In this work we introduce the Variational Quantum Unsampling (VQU) protocol, a nonlinear quantum neural network approach for verification and inference of the outputs of near-term quantum circuits. In our approach, one variationally trains a quantum operation to unravel the action of an unknown unitary on a known input state, essentially learning the inverse of the black-box quantum dynamics. While the principle of our approach is platform independent, its implementation depends on the unique architecture of a specific quantum processor. Here, we experimentally demonstrate the VQU protocol on a quantum photonic processor. Alongside quantum verification, our protocol has broad applications, including optimal quantum measurement and tomography, quantum sensing and imaging, and ansatz validation.
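A single-qubit toy captures the unsampling idea: variationally train an operation V so that applying V after an unknown unitary U returns the known input state, i.e., V approximates the inverse of U on that state. The Z-Y-Z parametrization and the finite-difference optimizer below are illustrative choices, not the training procedure used on the photonic processor.

```python
# Single-qubit "unsampling" sketch: learn V that undoes an unknown U on a known input.
import numpy as np

rng = np.random.default_rng(1)

def rz(a):
    return np.array([[np.exp(-1j * a / 2), 0], [0, np.exp(1j * a / 2)]])

def ry(a):
    return np.array([[np.cos(a / 2), -np.sin(a / 2)], [np.sin(a / 2), np.cos(a / 2)]], dtype=complex)

def v(theta):
    """Trainable circuit V(theta) in a Z-Y-Z parametrization."""
    return rz(theta[2]) @ ry(theta[1]) @ rz(theta[0])

# Unknown "black-box" unitary U: a random single-qubit rotation
a, b, c = rng.uniform(0, 2 * np.pi, 3)
U = rz(a) @ ry(b) @ rz(c)

ket_in = np.array([1.0, 0.0], dtype=complex)    # known input state |0>

def loss(theta):
    """1 - |<in| V(theta) U |in>|^2: zero exactly when V undoes U on the input state."""
    return 1.0 - abs(ket_in.conj() @ (v(theta) @ (U @ ket_in))) ** 2

# Plain finite-difference gradient descent as a stand-in for the variational training loop
theta = rng.uniform(0, 2 * np.pi, 3)
eps, lr = 1e-4, 0.5
for _ in range(500):
    grad = np.array([
        (loss(theta + eps * np.eye(3)[k]) - loss(theta - eps * np.eye(3)[k])) / (2 * eps)
        for k in range(3)
    ])
    theta -= lr * grad

print("final unsampling loss:", loss(theta))
```

A loss near zero certifies that the trained V inverts the black-box dynamics on the chosen input, which is the verification signal the protocol generalizes to multi-mode circuits.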
As progress is made towards the first generation of error-corrected quantum computers, careful characterization of a processor's noise environment will be crucial to designing tailored, low-overhead error correction protocols. While standard coherence metrics and characterization protocols such as T1 and T2 measurements, process tomography, and randomized benchmarking are now ubiquitous, these techniques provide only partial information about the dynamic multi-qubit loss channels responsible for processor errors, which can be described more fully by a Lindblad operator in the master-equation formalism. Here, we introduce and experimentally demonstrate Lindblad Tomography, a hardware-agnostic characterization protocol for tomographically reconstructing the Hamiltonian and Lindblad operators of a quantum channel from an ensemble of time-domain measurements. Performing Lindblad Tomography on a small superconducting quantum processor, we show that this technique characterizes and accounts for state-preparation and measurement (SPAM) errors and allows one to place strong bounds on the degree of non-Markovianity in the channels of interest. Comparing the results of single- and two-qubit measurements on the same processor, we demonstrate that Lindblad Tomography can also be used to identify and quantify sources of crosstalk, such as the presence of always-on qubit-qubit interactions.
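As a simplified illustration of fitting a master-equation model to time-domain data, the sketch below considers a single qubit with amplitude damping only and recovers the decay rate by a grid-search least-squares fit. The full protocol additionally reconstructs the Hamiltonian and all Lindblad operators while accounting for SPAM errors; this toy omits those steps and uses synthetic data with an assumed hidden rate.

```python
# Toy single-qubit Lindblad fit: recover an amplitude-damping rate from time-domain data.
import numpy as np

sm = np.array([[0, 1], [0, 0]], dtype=complex)        # jump operator sigma_minus = |0><1|
H = np.zeros((2, 2), dtype=complex)                   # no coherent dynamics in this toy model

def lindblad_rhs(rho, gamma):
    """d rho/dt = -i[H, rho] + gamma (L rho L^dag - 1/2 {L^dag L, rho}) with L = sigma_minus."""
    drho = -1j * (H @ rho - rho @ H)
    drho += gamma * (sm @ rho @ sm.conj().T
                     - 0.5 * (sm.conj().T @ sm @ rho + rho @ sm.conj().T @ sm))
    return drho

def excited_population(gamma, times, dt=1e-3):
    """Euler-integrate the master equation from |1><1| and record P(|1>) at `times`."""
    rho = np.array([[0, 0], [0, 1]], dtype=complex)
    out, t = [], 0.0
    for t_target in times:
        while t < t_target:
            rho = rho + dt * lindblad_rhs(rho, gamma)
            t += dt
        out.append(rho[1, 1].real)
    return np.array(out)

# Synthetic "measured" decay data generated with a hidden rate plus shot noise
times = np.linspace(0.1, 5.0, 20)
true_gamma = 0.7
rng = np.random.default_rng(2)
data = excited_population(true_gamma, times) + rng.normal(0, 0.01, times.size)

# Grid-search least-squares fit of the decay rate against the Lindblad model
gammas = np.linspace(0.1, 2.0, 40)
residuals = [np.sum((excited_population(g, times) - data) ** 2) for g in gammas]
print("fitted gamma:", gammas[int(np.argmin(residuals))], "(true:", true_gamma, ")")
```

The experimental protocol generalizes this idea: many initial states and measurement bases over many evolution times constrain a full parametrized Hamiltonian-plus-Lindblad model, and the quality of that fit bounds how non-Markovian the channel can be.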
Even if Google AI's Sycamore processor is efficient for the particular task it has been designed for, it fails to deliver universal computational capacity. Furthermore, even classical devices implementing transverse homoclinic orbits realize exponential speedups with respect to both universal classical and quantum computations. Moreover, assuming the validity of quantum mechanics, there already exist quantum oracles which violate the Church-Turing thesis.