
Testing symmetry on quantum computers

Posted by Mark Wilde
Publication date: 2021
Research language: English





Symmetry is a unifying concept in physics. In quantum information and beyond, it is known that quantum states possessing symmetry are not useful for certain information-processing tasks. For example, states that commute with a Hamiltonian realizing a time evolution are not useful for timekeeping during that evolution, and bipartite states that are highly extendible are not strongly entangled and thus not useful for basic tasks like teleportation. Motivated by this perspective, this paper details several quantum algorithms that test the symmetry of quantum states and channels. For the case of testing Bose symmetry of a state, we show that there is a simple and efficient quantum algorithm, while the tests for other kinds of symmetry rely on the aid of a quantum prover. We prove that the acceptance probability of each algorithm is equal to the maximum symmetric fidelity of the state being tested, thus giving a firm operational meaning to these latter resource quantifiers. Special cases of the algorithms test for incoherence or separability of quantum states. We evaluate the performance of these algorithms by using the variational approach to quantum algorithms, replacing the quantum prover with a variational circuit. We also show that the maximum symmetric fidelities can be calculated by semi-definite programs, which is useful for benchmarking the performance of the quantum algorithms for sufficiently small examples. Finally, we establish various generalizations of the resource theory of asymmetry, with the upshot being that the acceptance probabilities of the algorithms are resource monotones and thus well motivated from the resource-theoretic perspective.
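
As a concrete illustration of the simplest case, a two-copy Bose-symmetry test amounts to projecting onto the symmetric subspace. The minimal numpy sketch below (an illustration of the test statistic only, not the paper's algorithm or circuit) computes the acceptance probability Tr[Π_sym ρ] for a two-qubit state, with Π_sym = (I + SWAP)/2; the example states are chosen for illustration.

```python
# Minimal sketch: symmetric-subspace projection as a Bose-symmetry test
# statistic for two qubits. Illustrative only, not the paper's circuit.
import numpy as np

# SWAP operator on two qubits and the projector onto the symmetric subspace.
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)
PI_SYM = (np.eye(4) + SWAP) / 2

def bose_symmetry_acceptance(rho):
    """Probability that the test projects rho into the symmetric subspace."""
    return float(np.real(np.trace(PI_SYM @ rho)))

# The symmetric Bell state passes with probability 1; the antisymmetric
# singlet is rejected with certainty.
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
psi_minus = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
print(bose_symmetry_acceptance(np.outer(phi_plus, phi_plus.conj())))    # 1.0
print(bose_symmetry_acceptance(np.outer(psi_minus, psi_minus.conj())))  # 0.0
```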




Read also

The performance of a quantum information processing protocol is ultimately judged by distinguishability measures that quantify how distinguishable the actual result of the protocol is from the ideal case. The most prominent distinguishability measures are those based on the fidelity and trace distance, due to their physical interpretations. In this paper, we propose and review several algorithms for estimating distinguishability measures based on trace distance and fidelity, and we evaluate their performance using simulators of quantum computers. The algorithms can be used for distinguishing quantum states, channels, and strategies (the last also known in the literature as quantum combs). The fidelity-based algorithms offer novel physical interpretations of these distinguishability measures in terms of the maximum probability with which a single prover (or competing provers) can convince a verifier to accept the outcome of an associated computation. We simulate these algorithms by using a variational approach with parameterized quantum circuits and find that they converge well for the examples that we consider.
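
For states small enough to write down, both measures can be computed classically as a sanity check on the variational estimates. Below is a minimal numpy/scipy reference implementation (a classical benchmark, not one of the paper's quantum algorithms).

```python
# Classical reference implementations of trace distance and Uhlmann fidelity
# for density matrices, useful for checking small examples.
import numpy as np
from scipy.linalg import sqrtm

def trace_distance(rho, sigma):
    """T(rho, sigma) = (1/2) ||rho - sigma||_1, via eigenvalues of rho - sigma."""
    return 0.5 * float(np.sum(np.abs(np.linalg.eigvalsh(rho - sigma))))

def fidelity(rho, sigma):
    """F(rho, sigma) = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2."""
    s = sqrtm(rho)
    return float(np.real(np.trace(sqrtm(s @ sigma @ s))) ** 2)

# Example: maximally mixed qubit vs. the pure state |0><0|.
rho = np.eye(2) / 2
sigma = np.diag([1.0, 0.0])
print(trace_distance(rho, sigma))  # 0.5
print(fidelity(rho, sigma))        # 0.5
```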
Kevin Slagle (2021)
We consider the hypothesis that quantum mechanics is not fundamental, but instead emerges from a theory with less computational power, such as classical mechanics. This hypothesis makes the prediction that quantum computers will not be capable of sufficiently complex quantum computations. Utilizing this prediction, we outline a proposal to test for such a breakdown of quantum mechanics using near-term noisy intermediate-scale quantum (NISQ) computers. Our procedure involves simulating a non-Clifford random circuit, followed by its inverse, and then checking that the resulting state is the same as the initial state. We show that quantum mechanics predicts that the fidelity of this procedure decays exponentially with circuit depth (due to noise in NISQ computers). However, if quantum mechanics emerges from a theory with significantly less computational power, then we expect the fidelity to decay significantly more rapidly than the quantum mechanics prediction for sufficiently deep circuits, which is the experimental signature that we propose to search for. Useful experiments can be performed with 80 qubits and gate infidelity $10^{-3}$, while highly informative experiments should require only 1000 qubits and gate infidelity $10^{-5}$.
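
The quantum-mechanical baseline can be sketched with a crude noise model. The toy estimate below is an assumption for illustration (the brickwork gate layout and per-gate treatment of noise are simplifications, not the paper's simulation): fidelity decays exponentially in the total number of gates in the circuit plus its inverse.

```python
# Toy model of the predicted exponential fidelity decay for a random
# circuit followed by its inverse on a noisy device. Illustrative only.
def predicted_fidelity(depth, n_qubits, gate_infidelity):
    """Crude estimate: each two-qubit gate succeeds with prob (1 - p)."""
    gates_per_layer = n_qubits // 2        # assume a brickwork layout
    n_gates = 2 * depth * gates_per_layer  # forward circuit plus inverse
    return (1.0 - gate_infidelity) ** n_gates

# Rough scale of the proposed experiments (80 qubits, gate infidelity 1e-3).
for depth in (1, 10, 100):
    print(depth, predicted_fidelity(depth, n_qubits=80, gate_infidelity=1e-3))
```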
Combinatorial optimization on near-term quantum devices is a promising path to demonstrating quantum advantage. However, the capabilities of these devices are constrained by high noise or error rates. In this paper, we propose an iterative Layer VQE (L-VQE) approach, inspired by the Variational Quantum Eigensolver (VQE). We present a large-scale numerical study, simulating circuits with up to 40 qubits and 352 parameters, that demonstrates the potential of the proposed approach. We evaluate quantum optimization heuristics on the problem of detecting multiple communities in networks, for which we introduce a novel qubit-frugal formulation. We numerically compare L-VQE with Quantum Approximate Optimization Algorithm (QAOA) and demonstrate that QAOA achieves lower approximation ratios while requiring significantly deeper circuits. We show that L-VQE is more robust to finite sampling errors and has a higher chance of finding the solution as compared with standard VQE approaches. Our simulation results show that L-VQE performs well under realistic hardware noise.
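
The layer-wise idea can be conveyed on a toy problem. The sketch below is a hypothetical two-qubit example (not the paper's ansatz, Hamiltonian, or 40-qubit study): the ansatz grows one layer at a time, and each new layer's parameters start at zero so the previously optimized layers serve as a warm start.

```python
# Toy layer-wise VQE loop on a hypothetical 2-qubit Hamiltonian.
import numpy as np
from scipy.optimize import minimize

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.kron(Z, Z) + 0.5 * np.kron(X, np.eye(2))  # made-up Hamiltonian

def ansatz_state(params):
    """Layered ansatz: per layer, an RY on each qubit followed by a CNOT."""
    def ry(t):
        c, s = np.cos(t / 2), np.sin(t / 2)
        return np.array([[c, -s], [s, c]], dtype=complex)
    state = np.zeros(4, dtype=complex)
    state[0] = 1.0
    cnot = np.eye(4)[[0, 1, 3, 2]]  # CNOT as a basis permutation
    for t0, t1 in params.reshape(-1, 2):
        state = cnot @ np.kron(ry(t0), ry(t1)) @ state
    return state

def energy(params):
    psi = ansatz_state(np.asarray(params))
    return float(np.real(psi.conj() @ H @ psi))

params = np.zeros(0)
for layer in range(1, 4):                           # grow one layer at a time
    params = np.concatenate([params, np.zeros(2)])  # warm start: new layer at 0
    params = minimize(energy, params, method="COBYLA").x
    print(f"layers={layer}, energy={energy(params):.4f}")
```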
This expository paper reviews some of the recent uses of computational algebraic geometry in classical and quantum optimization. The paper assumes an elementary background in algebraic geometry and adiabatic quantum computing (AQC), and concentrates on presenting concrete examples (with Python codes tested on a quantum computer) of applying algebraic geometry constructs: solving binary optimization, factoring, and compiling. Reversing the direction, we also briefly describe a novel use of quantum computers to compute Groebner bases for toric ideals. We also show how Groebner bases play a role in studying AQC at a fundamental level within a Morse theory framework. We close by placing our work in perspective, by situating this leg of the journey, as part of a marvelous intellectual expedition that began with our ancients over 4000 years ago.
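
To give a flavor of the approach: binary constraints become polynomial equations (x**2 - x = 0 forces x into {0, 1}), and a Groebner basis puts the system into a triangular, easily solvable form. Here is a small sympy example in that spirit (the paper's own Python codes may differ).

```python
# Solve a tiny binary constraint system with a Groebner basis.
from sympy import groebner, symbols

x, y = symbols("x y")
# x, y are binary, and exactly one of them must be set: x + y = 1.
polys = [x**2 - x, y**2 - y, x + y - 1]
G = groebner(polys, x, y, order="lex")
print(G)  # basis {x + y - 1, y**2 - y}: x = 1 - y with y in {0, 1}
```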
Many quantum algorithms for machine learning require access to classical data in superposition. However, for many natural data sets and algorithms, the overhead required to load the data set in superposition can erase any potential quantum speedup over classical algorithms. Recent work by Harrow introduces a new paradigm in hybrid quantum-classical computing to address this issue, relying on coresets to minimize the data loading overhead of quantum algorithms. We investigate using this paradigm to perform $k$-means clustering on near-term quantum computers, by casting it as a QAOA optimization instance over a small coreset. We compare the performance of this approach to classical $k$-means clustering both numerically and experimentally on IBM Q hardware. We are able to find data sets where coresets work well relative to random sampling and where QAOA could potentially outperform standard $k$-means on a coreset. However, finding data sets where both coresets and QAOA work well--which is necessary for a quantum advantage over $k$-means on the entire data set--appears to be challenging.
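
A purely classical version of the pipeline is easy to sketch. In the example below, a uniform-sampling coreset and sklearn's weighted k-means stand in for the paper's coreset construction and QAOA step (both stand-ins are assumptions for illustration).

```python
# Cluster a weighted coreset and compare against k-means on the full data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-3, 1, (500, 2)), rng.normal(3, 1, (500, 2))])

# Uniform-sampling coreset: m points, each weighted n/m so the weighted
# clustering cost approximates the full data set's cost.
m = 40
idx = rng.choice(len(X), size=m, replace=False)
coreset, weights = X[idx], np.full(m, len(X) / m)

km_core = KMeans(n_clusters=2, n_init=10).fit(coreset, sample_weight=weights)
km_full = KMeans(n_clusters=2, n_init=10).fit(X)
print("coreset centers:\n", km_core.cluster_centers_)
print("full-data centers:\n", km_full.cluster_centers_)
```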