
Cross-Platform Comparison of Arbitrary Quantum Computations

Added by Ze-Pei Cian
Publication date: 2021
Field: Physics
Language: English





As we approach the era of quantum advantage, when quantum computers (QCs) can outperform any classical computer on particular tasks, there remains the difficult challenge of how to validate their performance. While algorithmic success can be easily verified in some instances such as number factoring or oracular algorithms, these approaches only provide pass/fail information for a single QC. On the other hand, a comparison between different QCs on the same arbitrary circuit provides a lower bound for generic validation: a quantum computation is only as valid as the agreement between the results produced on different QCs. Such an approach is also at the heart of evaluating metrological standards such as disparate atomic clocks. In this paper, we report a cross-platform QC comparison using randomized and correlated measurements that results in a wealth of information on the QC systems. We execute several quantum circuits on widely different physical QC platforms and analyze the cross-platform fidelities.
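In protocols of this kind, the cross-platform fidelity is typically estimated from correlated randomized measurements: both devices apply the same random single-qubit rotations before measuring in the computational basis, and the overlap Tr(ρ1ρ2) is reconstructed from cross-correlations of the resulting bitstring statistics. Below is a minimal sketch of that estimator, assuming the randomized-measurement formula introduced by Elben and co-workers for cross-platform verification; the function names and the simplified finite-shot handling are illustrative choices, not code from the paper.

import numpy as np

def overlap_estimate(probs_a, probs_b, n_qubits):
    # probs_x[u, s]: estimated probability of bitstring s on device x under
    # random-unitary setting u (same single-qubit unitaries on both devices).
    # Returns an estimate of Tr(rho_a rho_b) via
    #   2^N * mean_u sum_{s,s'} (-2)^(-D(s,s')) P_u^a(s) P_u^b(s'),
    # where D(s,s') is the Hamming distance between bitstrings s and s'.
    dim = 2 ** n_qubits
    bits = np.array([[(s >> k) & 1 for k in range(n_qubits)] for s in range(dim)])
    hamming = (bits[:, None, :] != bits[None, :, :]).sum(axis=-1)
    weights = (-0.5) ** hamming          # (-2)^(-D) == (-1/2)^D
    per_setting = np.einsum("us,st,ut->u", probs_a, weights, probs_b)
    return dim * per_setting.mean()

def cross_platform_fidelity(probs_a, probs_b, n_qubits):
    # F_max = Tr(rho_a rho_b) / max(Tr rho_a^2, Tr rho_b^2).
    # Finite-shot bias corrections for the purity terms are omitted here.
    overlap = overlap_estimate(probs_a, probs_b, n_qubits)
    purity_a = overlap_estimate(probs_a, probs_a, n_qubits)
    purity_b = overlap_estimate(probs_b, probs_b, n_qubits)
    return overlap / max(purity_a, purity_b)

The (-2)^(-D) weighting over Hamming distances is what turns second-order cross-correlations of bitstring probabilities into an overlap estimate; in practice one would also correct the purity estimates for finite measurement statistics.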




Related research

As a variety of quantum computing models and platforms become available, methods for assessing and comparing the performance of these devices are of increasing interest and importance. Despite being built of the same fundamental computational unit, radically different approaches have emerged for characterizing the performance of qubits in gate-based and quantum annealing computers, limiting and complicating consistent cross-platform comparisons. To fill this gap, this work proposes a single-qubit protocol (Q-RBPN) for measuring some basic performance characteristics of individual qubits in both models of quantum computation. The proposed protocol scales to large quantum computers with thousands of qubits and provides insights into the distribution of qubit properties within a particular hardware device and across families of devices. The efficacy of the Q-RBPN protocol is demonstrated through the analysis of more than 300 gate-based qubits spanning eighteen machines and 2000 annealing-based qubits from one machine, revealing some unexpected differences in qubit performance. Overall, the proposed Q-RBPN protocol provides a new platform-agnostic tool for assessing the performance of a wide range of emerging quantum computing devices.
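The internals of Q-RBPN are not spelled out in this summary, so the sketch below is only a hypothetical illustration of the kind of per-qubit, platform-agnostic summary such a protocol produces: each qubit gets an empirical success probability from a simple prepare-and-measure random-bit experiment, and the distribution of those probabilities is what gets compared within and across devices. The experiment and all names here are placeholders, not the paper's protocol.

import numpy as np

def per_qubit_success(outcomes, targets):
    # outcomes[r, q]: measured bit of qubit q in repetition r.
    # targets[r, q]:  bit that qubit q was asked to prepare in repetition r.
    # Returns one empirical success probability per qubit.
    return (outcomes == targets).mean(axis=0)

def summarize_device(outcomes, targets):
    # Distribution of single-qubit success probabilities for one device,
    # the kind of summary that can be compared across gate-based and
    # annealing hardware without reference to either gate model.
    p = per_qubit_success(outcomes, targets)
    return {"median": float(np.median(p)),
            "iqr": float(np.percentile(p, 75) - np.percentile(p, 25)),
            "worst": float(p.min())}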
The widely held belief that BQP strictly contains BPP raises fundamental questions: Upcoming generations of quantum computers might already be too large to be simulated classically. Is it possible to experimentally test that these systems perform as they should, if we cannot efficiently compute predictions for their behavior? Vazirani has asked: If predicting quantum mechanical systems requires exponential resources, is quantum mechanics a falsifiable theory? In cryptographic settings, an untrusted future company wants to sell a quantum computer or perform a delegated quantum computation. Can the customer be convinced of correctness without the ability to compare results to predictions? To answer these questions, we define Quantum Prover Interactive Proofs (QPIP). Whereas in standard interactive proofs the prover is computationally unbounded, here our prover is in BQP, representing a quantum computer. The verifier models our current computational capabilities: it is a BPP machine with access to a few qubits. Our main theorem can be roughly stated as: any language in BQP has a QPIP, and moreover a fault-tolerant one. We provide two proofs. The simpler one uses a new quantum authentication scheme (QAS), possibly of independent interest, based on random Clifford elements; this QPIP, however, is not fault tolerant. Our second protocol uses the polynomial-codes QAS due to BCGHS, combined with quantum fault tolerance and multiparty quantum computation techniques. A slight modification of our constructions makes the protocol blind: the quantum computation and input are unknown to the prover. After deriving these results, we learned that Broadbent et al. have independently derived universal blind quantum computation using completely different methods. Their construction implicitly implies similar consequences.
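Stated compactly (a paraphrase of the abstract rather than the paper's formal theorem statement), the containment BQP ⊆ QPIP is the main theorem, while the reverse containment follows because the entire interaction, a BPP verifier holding a few qubits talking to a BQP prover, can itself be simulated by a single BQP machine:

\[
  \mathrm{BQP} \subseteq \mathrm{QPIP}
  \quad\text{and}\quad
  \mathrm{QPIP} \subseteq \mathrm{BQP}
  \;\;\Longrightarrow\;\;
  \mathrm{QPIP} = \mathrm{BQP}.
\]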
T. Huckle, K. Waldherr (2012)
The computation of the ground state (i.e. the eigenvector related to the smallest eigenvalue) is an important task in the simulation of quantum many-body systems. As the dimension of the underlying vector space grows exponentially in the number of particles, one has to consider appropriate subsets promising both convenient approximation properties and efficient computations. The variational ansatz for this numerical approach leads to the minimization of the Rayleigh quotient. The Alternating Least Squares technique is then applied to break the eigenvector computation down into problems of manageable size, which can be solved by classical methods. Efficient computations require fast evaluation of the matrix-vector product and of the inner product of two decomposed vectors. To this end, both appropriate representations of vectors and efficient contraction schemes are needed. Here, approaches from many-body quantum physics for one-dimensional and two-dimensional systems (Matrix Product States and Projected Entangled Pair States) are treated mathematically in terms of tensors. We give the definitions of these concepts, present some results concerning uniqueness and numerical stability, and show how computations can be executed efficiently within these frameworks. Based on this overview we present some modifications and generalizations of these concepts and show that they still allow efficient computations, including applicable contraction schemes. In this context we consider the minimization of the Rayleigh quotient in terms of the PARAFAC (CP) formalism, where we also allow different tensor partitions. This approach makes use of efficient contraction schemes for the calculation of inner products in a way that can easily be extended to the MPS format and to higher-dimensional problems.
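As a concrete illustration of the efficient inner products mentioned above, the sketch below contracts two Matrix Product States site by site, keeping the cost polynomial in the bond dimension rather than exponential in the number of particles. The tensor layout (left bond, physical, right bond) and the function names are conventions chosen here, not notation from the paper.

import numpy as np

def mps_inner_product(mps_a, mps_b):
    # <a|b> for two Matrix Product States given as lists of rank-3 tensors
    # with index order (left bond, physical, right bond); boundary bonds
    # have dimension 1. Contracting site by site keeps the cost at
    # O(n * chi^3 * d) instead of O(d^n) for n sites and bond dimension chi.
    env = np.ones((1, 1))                      # env[a, b]: bra bond a, ket bond b
    for A, B in zip(mps_a, mps_b):
        tmp = np.einsum("ab,apc->bpc", env, A.conj())
        env = np.einsum("bpc,bpd->cd", tmp, B)
    return env[0, 0]

# Example: overlap of two random 6-site MPS with bond dimension 4.
def random_mps(n_sites, phys=2, chi=4, seed=0):
    rng = np.random.default_rng(seed)
    dims = [1] + [chi] * (n_sites - 1) + [1]
    return [rng.normal(size=(dims[k], phys, dims[k + 1])) for k in range(n_sites)]

print(mps_inner_product(random_mps(6, seed=1), random_mps(6, seed=2)))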
The widely held belief that BQP strictly contains BPP raises fundamental questions: if we cannot efficiently compute predictions for the behavior of quantum systems, how can we test their behavior? In other words, is quantum mechanics falsifiable? In cryptographic settings, how can a customer of a future untrusted quantum computing company be convinced of the correctness of its quantum computations? To provide answers to these questions, we define Quantum Prover Interactive Proofs (QPIP). Whereas in standard interactive proofs the prover is computationally unbounded, here our prover is in BQP, representing a quantum computer. The verifier models our current computational capabilities: it is a BPP machine with access to only a few qubits. Our main theorem states, roughly: any language in BQP has a QPIP, which also hides the computation from the prover. We provide two proofs, one based on a quantum authentication scheme (QAS) relying on random Clifford rotations and the other based on a QAS which uses polynomial codes (BOCG+06), combined with secure multiparty computation methods. This is the journal version of work reported in 2008 (ABOE08) and presented at ICS 2010; here we have completed the details and made the proofs rigorous. Some of the proofs required major modifications and corrections. Notably, the claim that the polynomial QPIP is fault tolerant was removed. Similar results (with different protocols) were reported independently around the same time as the original version, in BFK08. The initial independent works (ABOE08, BFK08) ignited a long line of research on blind and verifiable quantum computation, which we survey here, along with connections to various cryptographic problems. Importantly, the problems of making the results fault tolerant and of removing the need for quantum communication altogether remain open.
We describe a general methodology for enhancing the efficiency of adiabatic quantum computations (AQC). It consists of homotopically deforming the original Hamiltonian surface in such a way that the redistribution of the Gaussian curvature weakens the effect of the anti-crossing, thus yielding the desired improvement. Our approach is not perturbative but instead builds on our previous global description of AQC in the language of Morse theory. Through the homotopy deformation we witness the birth and death of critical points whilst, in parallel, the Gauss-Bonnet theorem reshuffles the curvature around the changing set of critical points. Therefore, by creating enough critical points around the anti-crossing, the total curvature, which was initially centered at the original anti-crossing, gets redistributed around the new neighbouring critical points, which weakens its severity and so improves the speedup of the AQC. We illustrate this on two examples taken from the literature.
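For context (quoted here as standard background, not taken from the paper), the Gauss-Bonnet theorem fixes the total Gaussian curvature of a closed surface M by its topology alone,

\[
  \int_M K \, \mathrm{d}A = 2\pi\,\chi(M),
\]

so a homotopy of the Hamiltonian surface cannot destroy curvature, only move it: creating additional critical points near the anti-crossing spreads the fixed total over more locations, which is precisely the flattening mechanism described above.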