
Unraveling Quantum Annealers using Classical Hardness

Added by Victor Martin-Mayor
Publication date: 2015
Field: Physics
Language: English





Recent advances in quantum technology have led to the development and manufacturing of experimental programmable quantum annealing optimizers that contain hundreds of quantum bits. These optimizers, known as 'D-Wave' chips, promise to solve practical optimization problems potentially faster than conventional 'classical' computers. Attempts to quantify the quantum nature of these chips have been met with both excitement and skepticism, but have also raised numerous fundamental questions pertaining to the distinguishability of quantum annealers from their classical thermal counterparts. Here, we propose a general method aimed at answering these questions, and apply it to experimentally study the D-Wave chip. Inspired by spin-glass theory, we generate optimization problems with a wide spectrum of 'classical hardness', which we also define. By investigating the chip's response to classical hardness, we surprisingly find that the chip's performance scales unfavorably compared to several analogous classical algorithms. We detect, quantify and discuss purely classical effects that possibly mask the quantum behavior of the chip.
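The optimization problems discussed above are Ising spin glasses with random couplings. A minimal sketch of generating and scoring such an instance follows; for brevity it uses a fully connected graph with ±1 couplings rather than the chip's actual Chimera topology, which is an illustrative assumption, not the paper's construction:

```python
import random

def random_spin_glass(n, seed=0):
    """Random +/-J Ising spin glass on n spins (fully connected here;
    the D-Wave chip itself uses a sparser native topology)."""
    rng = random.Random(seed)
    # one coupling J_ij = +/-1 for every pair i < j
    return {(i, j): rng.choice([-1, 1])
            for i in range(n) for j in range(i + 1, n)}

def energy(J, s):
    """Ising energy H = -sum_{i<j} J_ij * s_i * s_j for spins s_i = +/-1."""
    return -sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
```

Finding the spin configuration that minimizes this energy is the optimization task handed to the annealer; "classical hardness" characterizes how difficult a given instance is for classical heuristics.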



Related research

The numerical solution of partial differential equations by discretization techniques is ubiquitous in computational physics. In this work we benchmark this approach in the quantum realm by solving the heat equation for a square plate subject to fixed temperatures at the edges and random heat sources and sinks within the domain. The hybrid classical-quantum approach consists of solving, on a quantum computer, the coupled linear system of equations that results from the discretization step. Owing to the limitations in the number of qubits and their connectivity, we use the Gauss-Seidel method to divide the full system of linear equations into subsystems, which are solved iteratively in block fashion. Each of the linear subsystems was solved using the 2000Q and Advantage quantum computers developed by D-Wave Systems Inc. By comparing classical numerical and quantum solutions, we observe that the errors and chain break fraction are, on average, greater on the 2000Q system. Unlike the classical Gauss-Seidel method, the errors of the quantum solutions level off after a few iterations of our algorithm. This is partly a result of the span of the real number line available from the mapping of the chosen size of the set of qubit states. We verified this by using techniques to progressively shrink the range mapped by the set of qubit states at each iteration (increasing floating-point accuracy). As a result, no leveling off is observed. However, an increase in qubits does not translate to an overall lower error. This is believed to be indicative of the increasing length of chains required for the mapping to real numbers and the ensuing limitations of hardware.
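The classical baseline in the abstract above is Gauss-Seidel iteration on the discretized steady-state heat equation. A minimal sketch of that reference method (a plain point-wise sweep on a uniform grid with unit spacing; the paper's block decomposition for the quantum solver is not reproduced here):

```python
import numpy as np

def gauss_seidel_heat(T, source, tol=1e-8, max_iter=10000):
    """Classical Gauss-Seidel sweeps for the steady heat equation
    -Laplacian(T) = source on a square grid, with fixed (Dirichlet)
    boundary values already written into the edges of T.
    Updates T in place and returns it."""
    for _ in range(max_iter):
        diff = 0.0
        for i in range(1, T.shape[0] - 1):
            for j in range(1, T.shape[1] - 1):
                # 5-point stencil: average of neighbors plus the source term
                new = 0.25 * (T[i - 1, j] + T[i + 1, j]
                              + T[i, j - 1] + T[i, j + 1]
                              + source[i, j])
                diff = max(diff, abs(new - T[i, j]))
                T[i, j] = new
        if diff < tol:   # stop once the sweep no longer changes T
            break
    return T
```

Because each update uses the most recent neighbor values, the classical iteration keeps reducing the error sweep after sweep; the abstract's observation is that the quantum-solved subsystems instead plateau after a few iterations.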
Finding the global minimum in a rugged potential landscape is a computationally hard task, often equivalent to relevant optimization problems. Simulated annealing is a computational technique which explores the configuration space by mimicking thermal noise. By slow cooling, it freezes the system in a low-energy configuration, but the algorithm often gets stuck in local minima. In quantum annealing, the thermal noise is replaced by controllable quantum fluctuations, and the technique can be implemented in modern quantum simulators. However, quantum-adiabatic schemes become prohibitively slow in the presence of quasidegeneracies. Here we propose a strategy which combines ideas from simulated annealing and quantum annealing. In this hybrid algorithm, the outcome of a quantum simulator is processed on a classical device. While the quantum simulator explores the configuration space by repeatedly applying quantum fluctuations and performing projective measurements, the classical computer evaluates each configuration and enforces a lowering of the energy. We have simulated this algorithm for small instances of the random energy model, showing that it potentially outperforms both simulated thermal annealing and adiabatic quantum annealing. It becomes most efficient for problems involving many quasi-degenerate ground states.
We provide a robust defence to adversarial attacks on discriminative algorithms. Neural networks are naturally vulnerable to small, tailored perturbations in the input data that lead to wrong predictions. On the contrary, generative models attempt to learn the distribution underlying a dataset, making them inherently more robust to small perturbations. We use Boltzmann machines for discrimination purposes as attack-resistant classifiers, and compare them against standard state-of-the-art adversarial defences. We find improvements ranging from 5% to 72% against attacks with Boltzmann machines on the MNIST dataset. We furthermore complement the training with quantum-enhanced sampling from the D-Wave 2000Q annealer, finding results comparable with classical techniques and with marginal improvements in some cases. These results underline the relevance of probabilistic methods in constructing neural networks and highlight a novel scenario of practical relevance where quantum computers, even with limited hardware capabilities, could provide advantages over classical computers. This work is dedicated to the memory of Peter Wittek.
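The sampling step at the heart of Boltzmann-machine training (the step the abstract replaces with quantum-enhanced sampling from the annealer) is, classically, block Gibbs sampling. A minimal sketch for a Bernoulli restricted Boltzmann machine; the variable names and shapes are illustrative, not taken from the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v, W, b, c, rng):
    """One block Gibbs step in a Bernoulli RBM with visible bias b,
    hidden bias c, and weights W (shape: n_visible x n_hidden).
    Samples hidden units given visible, then visible given hidden."""
    h = (rng.random(c.size) < sigmoid(v @ W + c)).astype(float)
    v_new = (rng.random(b.size) < sigmoid(h @ W.T + b)).astype(float)
    return v_new, h
```

Classically one iterates this step to draw model samples for the gradient; the quantum-enhanced variant draws the samples from the annealer's output distribution instead.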
Recent tests performed on the D-Wave Two quantum annealer have revealed no clear evidence of speedup over conventional silicon-based technologies. Here, we present results from classical parallel-tempering Monte Carlo simulations, combined with isoenergetic cluster moves, of the archetypal benchmark problem, an Ising spin glass, on the native chip topology. Using realistic uncorrelated noise models for the D-Wave Two quantum annealer, we study the best-case resilience, i.e., the probability that the ground-state configuration is not affected by random fields and random-bond fluctuations found on the chip. We thus compute classical upper-bound success probabilities for different types of disorder used in the benchmarks and predict that an increase in the number of qubits will require either error correction schemes or a drastic reduction of the intrinsic noise found in these devices. We outline strategies to develop robust as well as hard benchmarks for quantum annealing devices, and for any other computing paradigm affected by noise.
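The replica-exchange move that gives parallel tempering its power can be sketched in a few lines. Adjacent temperatures swap configurations with the standard Metropolis probability min(1, exp(Δβ·ΔE)); the isoenergetic cluster moves mentioned in the abstract are a separate, more involved update and are not shown here:

```python
import math
import random

def swap_replicas(energies, betas, rng):
    """One parallel-tempering sweep of replica swaps. betas are inverse
    temperatures in decreasing order; order[k] tracks which replica
    currently sits at temperature k. Adjacent pairs exchange with
    probability min(1, exp((beta_k - beta_{k+1}) * (E_k - E_{k+1})))."""
    order = list(range(len(betas)))
    for k in range(len(betas) - 1):
        d = (betas[k] - betas[k + 1]) * (
            energies[order[k]] - energies[order[k + 1]])
        if d >= 0 or rng.random() < math.exp(d):
            order[k], order[k + 1] = order[k + 1], order[k]
    return order
```

These swaps let low-energy configurations found at high temperature percolate down to low temperature, which is what makes the method an effective classical competitor on rugged spin-glass landscapes.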
We propose an efficient numerical method to compute configuration averages of observables in disordered open quantum systems whose dynamics can be unraveled via stochastic trajectories. We prove that the optimal sampling of trajectories and disorder configurations is simply achieved by considering one random disorder configuration for each individual trajectory. As a first application, we apply the present method to study the role of disorder on the physics of the driven-dissipative Bose-Hubbard model in two different regimes: (i) for strong interactions, we explore the dissipative physics of fermionized bosons in disordered one-dimensional chains; (ii) for weak interactions, we investigate the role of on-site inhomogeneities on a first-order dissipative phase transition in a two-dimensional square lattice.
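The sampling strategy claimed optimal above has a very simple structure: instead of running many trajectories per disorder configuration, draw one fresh configuration per trajectory and average everything together. A generic sketch, with the physical trajectory replaced by an abstract callable (an assumption for illustration; the paper's trajectories unravel an open-system master equation):

```python
import random

def disorder_average(observable, sample_disorder, n_traj=1000, seed=0):
    """Disorder-averaged observable via the one-configuration-per-
    trajectory rule: each stochastic trajectory is run in its own
    freshly sampled disorder realization, and a single flat average
    is taken over all trajectories."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_traj):
        disorder = sample_disorder(rng)      # new realization per trajectory
        total += observable(disorder, rng)   # one trajectory in it
    return total / n_traj
```

The point of the result is that this flat average converges at the same rate as any nested trajectory-times-disorder scheme, so no computational effort is wasted on resampling trajectories within a fixed realization.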