
Quantum Annealing amid Local Ruggedness and Global Frustration

Published by James King
Publication date: 2017
Research field: Physics
Paper language: English





A recent Google study [Phys. Rev. X, 6:031015 (2016)] compared a D-Wave 2X quantum processing unit (QPU) to two classical Monte Carlo algorithms: simulated annealing (SA) and quantum Monte Carlo (QMC). The study showed the D-Wave 2X to be up to 100 million times faster than the classical algorithms. The Google inputs are designed to demonstrate the value of collective multiqubit tunneling, a resource available to D-Wave QPUs but not to simulated annealing. But the computational hardness in these inputs is highly localized in gadgets, with only a small amount of complexity coming from global interactions, which limits their relevance to real-world problems. In this study we provide a new synthetic problem class that addresses the limitations of the Google inputs while retaining their strengths. We use simple clusters instead of more complex gadgets, and we place more emphasis on creating computational hardness through frustrated global interactions like those seen in interesting real-world inputs. The logical problems used to generate these inputs can be solved in polynomial time [J. Phys. A, 15:10 (1982)]. However, for general heuristic algorithms that are unaware of the planted problem class, the frustration creates meaningful difficulty in a controlled environment that is ideal for study. We use these inputs to evaluate the new 2000-qubit D-Wave QPU. We include the HFS algorithm, the best performer in a broader analysis of the Google inputs, as well as state-of-the-art GPU implementations of SA and QMC. The D-Wave QPU solidly outperforms the software solvers: when we consider pure annealing time (computation time), the D-Wave QPU reaches ground states up to 2600 times faster than the competition. In the task of zero-temperature Boltzmann sampling from challenging multimodal inputs, the D-Wave QPU holds a similar advantage, as quantum sampling bias does not appear to be significant.
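The classical baseline in benchmarks like this, simulated annealing, is simple to sketch. Below is a minimal single-spin-flip Metropolis anneal on a toy frustrated Ising instance; the schedule, parameter values, and function names are illustrative, not the paper's GPU implementation:

```python
import math
import random

def energy(J, s):
    """Ising energy H(s) = -sum_{(i,j)} J_ij s_i s_j (no fields, for brevity)."""
    return -sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

def simulated_annealing(J, n, sweeps=2000, beta0=0.1, beta1=5.0, seed=None):
    """Single-spin-flip Metropolis anneal on a linear inverse-temperature schedule."""
    rng = random.Random(seed)
    s = [rng.choice([-1, 1]) for _ in range(n)]
    neighbors = {i: [] for i in range(n)}
    for (i, j), Jij in J.items():
        neighbors[i].append((j, Jij))
        neighbors[j].append((i, Jij))
    for t in range(sweeps):
        beta = beta0 + (beta1 - beta0) * t / (sweeps - 1)
        for i in range(n):
            # Flipping spin i changes the energy by dE = 2 * s_i * (local field).
            field = sum(Jij * s[j] for j, Jij in neighbors[i])
            dE = 2.0 * s[i] * field
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                s[i] = -s[i]
    return s

# Toy frustrated instance: an antiferromagnetic triangle (all J_ij = -1).
# No assignment satisfies all three bonds, so the ground-state energy is -1.
J = {(0, 1): -1.0, (0, 2): -1.0, (1, 2): -1.0}
spins = simulated_annealing(J, 3, seed=0)
```

The frustration here is the point: even this three-spin instance has no configuration that satisfies every bond, which is the kind of global conflict the paper's planted inputs create at scale.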


Read also

Satoshi Morita, 2007
New annealing schedules for quantum annealing are proposed based on the adiabatic theorem. These schedules exhibit a faster decrease of the excitation probability than a linear schedule. To derive this conclusion, the asymptotic form of the excitation probability for quantum annealing is obtained explicitly in the limit of long annealing time. Its first-order term, which is inversely proportional to the square of the annealing time, is shown to be determined only by the information at the initial and final times. Our annealing schedules make it possible to drop this term, thus leading to a higher-order (smaller) excitation probability. We verify these results by numerically solving the time-dependent Schrödinger equation for small systems.
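Since the O(1/T^2) term depends only on the schedule's behavior at the initial and final times, a schedule whose time derivative vanishes at both endpoints suppresses it. A minimal numerical check of that endpoint property (the cosine schedule below is a common illustrative choice, not necessarily one of the schedules derived by Morita):

```python
import math

def linear_schedule(t, T):
    """s(t) = t/T: nonzero slope at both endpoints."""
    return t / T

def smooth_schedule(t, T):
    """s(t) = (1 - cos(pi t / T)) / 2: ds/dt = 0 at t = 0 and t = T,
    removing the boundary contribution behind the O(1/T^2) excitation term."""
    return 0.5 * (1.0 - math.cos(math.pi * t / T))

def endpoint_slopes(f, T, h=1e-6):
    """One-sided finite-difference slopes of schedule f at t = 0 and t = T."""
    return ((f(h, T) - f(0.0, T)) / h,
            (f(T, T) - f(T - h, T)) / h)
```

Both schedules still sweep from s = 0 to s = 1; only the approach to the endpoints differs, which is exactly the freedom the asymptotic analysis exploits.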
Even photosynthesis -- the most basic natural phenomenon underlying Life on Earth -- involves the non-trivial processing of excitations at the pico- and femtosecond scales during light-harvesting. The desire to understand such natural phenomena, as well as interpret the output from ultrafast experimental probes, creates an urgent need for accurate quantitative theories of open quantum systems. However, it is unclear how best to generalize the well-established assumptions of an isolated system, particularly under non-equilibrium conditions. Here we compare two popular approaches: a description in terms of a direct product of the states of each individual system (i.e. a local approach) versus the use of new states resulting from diagonalizing the whole Hamiltonian (i.e. a global approach). We show that their equivalence fails when the system is open, in particular under the experimentally ubiquitous condition of a temperature gradient. By solving for the steady-state populations and calculating the heat flux as a test observable, we uncover stark differences between the formulations. This divergence highlights the need to establish rigorous ranges of applicability for such methods in modeling nanoscale transfer phenomena -- including during the light-harvesting process in photosynthesis.
We devise an approach to characterizing the intricate interplay between classical and quantum interference of two-photon states in a network, which comprises multiple time-bin modes. By controlling the phases of delocalized single photons, we manipulate the global mode structure, resulting in distinct two-photon interference phenomena for time-bin resolved (local) and time-bucket (global) coincidence detection. This coherent control over the photons' mode structure allows for synthesizing two-photon interference patterns, where local measurements yield standard Hong-Ou-Mandel dips while the global two-photon visibility is governed by the overlap of the delocalized single-photon states. Thus, our experiment introduces a method for engineering distributed quantum interferences in networks.
A broad range of quantum optimisation problems can be phrased as the question whether a specific system has a ground state at zero energy, i.e. whether its Hamiltonian is frustration free. Frustration-free Hamiltonians, in turn, play a central role for constructing and understanding new phases of matter in quantum many-body physics. Unfortunately, determining whether this is the case is known to be a complexity-theoretically intractable problem. This makes it highly desirable to search for efficient heuristics and algorithms in order to, at least, partially answer this question. Here we prove a general criterion - a sufficient condition - under which a local Hamiltonian is guaranteed to be frustration free by lifting Shearer's theorem from classical probability theory to the quantum world. Remarkably, evaluating this condition proceeds via a fully classical analysis of a hard-core lattice gas at negative fugacity on the Hamiltonian's interaction graph which, as a statistical mechanics problem, is of interest in its own right. We concretely apply this criterion to local Hamiltonians on various regular lattices, while bringing to bear the tools of spin glass physics which permit us to obtain new bounds on the SAT/UNSAT transition in random quantum satisfiability. These also lead us to natural conjectures for when such bounds will be tight, as well as to a novel notion of universality for these computer science problems. Besides providing concrete algorithms leading to detailed and quantitative insights, this underscores the power of marrying classical statistical mechanics with quantum computation and complexity theory.
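The classical computation alluded to can be made concrete for tiny graphs: in Shearer-type conditions, the hard-core partition function (independence polynomial) must stay positive at negative fugacity on every induced subgraph of the interaction graph. A brute-force sketch, where the single parameter `lam` stands in for the problem-dependent quantity the paper derives, and enumeration is only viable for toy graphs:

```python
from itertools import combinations

def independence_poly(vertices, edges, lam):
    """Hard-core partition function Z_G(lam) = sum over independent sets I
    of lam^|I| (brute-force enumeration, fine only for tiny graphs)."""
    edge_set = {frozenset(e) for e in edges}
    total = 0.0
    for k in range(len(vertices) + 1):
        for subset in combinations(vertices, k):
            if all(frozenset(p) not in edge_set
                   for p in combinations(subset, 2)):
                total += lam ** k
    return total

def shearer_ok(vertices, edges, lam):
    """Shearer-type sufficient condition (sketch): Z at negative fugacity -lam
    must stay positive on every induced subgraph of the interaction graph."""
    for k in range(len(vertices) + 1):
        for sub in combinations(vertices, k):
            sub_edges = [e for e in edges if set(e) <= set(sub)]
            if independence_poly(sub, sub_edges, -lam) <= 0:
                return False
    return True
```

For a triangle, Z(-lam) = 1 - 3*lam, so the condition holds up to lam = 1/3 and fails beyond it, which is the kind of sharp threshold the paper's lattice-gas analysis extracts on larger interaction graphs.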
We present a general error-correcting scheme for quantum annealing that allows for the encoding of a logical qubit into an arbitrarily large number of physical qubits. Given any Ising model optimization problem, the encoding replaces each logical qubit by a complete graph of degree $C$, representing the distance of the error-correcting code. A subsequent minor-embedding step then implements the encoding on the underlying hardware graph of the quantum annealer. We demonstrate experimentally that the performance of a D-Wave Two quantum annealing device improves as $C$ grows. We show that the performance improvement can be interpreted as arising from an effective increase in the energy scale of the problem Hamiltonian, or equivalently, an effective reduction in the temperature at which the device operates. The number $C$ thus allows us to control the amount of protection against thermal and control errors, and in particular, to trade qubits for a lower effective temperature that scales as $C^{-\eta}$, with $\eta \leq 2$. This effective temperature reduction is an important step towards scalable quantum annealing.
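The encoding step is easy to sketch. Using the convention H(s) = sum_i h_i s_i + sum_{i<j} J_ij s_i s_j (so ferromagnetic means negative J), each logical qubit becomes C copies tied together by penalty couplings on a complete graph, and readout is a majority vote. The field/coupling scalings and the penalty strength `gamma` below are illustrative choices, not the paper's exact parameters, and the subsequent minor-embedding step is omitted:

```python
from itertools import combinations

def encode(h, J, C, gamma=1.0):
    """Map a logical Ising problem (h, J) to C physical copies per qubit.
    Physical qubits are labelled (i, c); penalty couplings are ferromagnetic
    (negative J) on the complete graph K_C within each logical qubit."""
    h_phys, J_phys = {}, {}
    for i, hi in h.items():
        for c in range(C):
            h_phys[(i, c)] = hi / C            # spread the field over the copies
        for a, b in combinations(range(C), 2):
            J_phys[((i, a), (i, b))] = -gamma  # penalty: copies should agree
    for (i, j), Jij in J.items():
        for a in range(C):
            for b in range(C):
                J_phys[((i, a), (j, b))] = Jij / C**2  # spread the coupling
    return h_phys, J_phys

def decode(sample, n, C):
    """Majority vote over the C copies of each logical qubit (odd C avoids ties)."""
    return [1 if sum(sample[(i, c)] for c in range(C)) > 0 else -1
            for i in range(n)]
```

Raising `gamma` makes disagreement among copies costlier, which is the mechanism behind the effective energy-scale increase the paper measures as C grows.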