We benchmark the quantum processing units of the largest quantum annealers to date, the 5000+ qubit quantum annealer Advantage and its 2000+ qubit predecessor D-Wave 2000Q, using tail assignment and exact cover problems from aircraft scheduling scenarios. The benchmark set contains small, intermediate, and large problems with both sparsely connected and almost fully connected instances. We find that Advantage outperforms D-Wave 2000Q for almost all problems, with a notable increase in success rate and solvable problem size. In particular, Advantage is able to solve the largest problems with 120 logical qubits, which D-Wave 2000Q can no longer solve. Furthermore, problems that can still be solved by D-Wave 2000Q are solved faster by Advantage. D-Wave 2000Q achieves better success rates only for a few very sparsely connected problems.
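The exact cover problems in the benchmark set can be cast as a QUBO, the input format of D-Wave annealers. Below is a minimal sketch of the standard penalty construction $H = \sum_e (1 - \sum_{i: e \in S_i} x_i)^2$; the function and variable names are illustrative, not taken from the paper:

```python
# Hedged sketch: encode an exact-cover instance (choose subsets so that
# every element of the universe is covered exactly once) as a QUBO dict
# {(i, j): coefficient}, the form accepted by D-Wave samplers.
from itertools import combinations

def exact_cover_qubo(subsets, universe):
    """QUBO for H = sum_e (1 - sum_{i: e in S_i} x_i)^2, dropping the
    constant offset len(universe). Valid covers reach energy -len(universe)."""
    Q = {}
    for e in universe:
        members = [i for i, s in enumerate(subsets) if e in s]
        for i in members:
            # expanding the square with x_i^2 = x_i gives a -1 linear term
            Q[(i, i)] = Q.get((i, i), 0) - 1
        for i, j in combinations(members, 2):
            # each pair covering the same element is penalized by +2
            Q[(i, j)] = Q.get((i, j), 0) + 2
    return Q
```

A QUBO built this way can be handed directly to a sampler; any assignment whose energy equals minus the universe size is an exact cover.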
The development of quantum-classical hybrid (QCH) algorithms is critical to achieve state-of-the-art computational models. A QCH variational autoencoder (QVAE) was introduced in Ref. [1] by some of the authors of this paper. QVAE consists of a classical auto-encoding structure realized by traditional deep neural networks to perform inference to, and generation from, a discrete latent space. The latent generative process is formalized as thermal sampling from either a quantum or classical Boltzmann machine (QBM or BM). This setup allows quantum-assisted training of deep generative models by physically simulating the generative process with quantum annealers. In this paper, we have successfully employed D-Wave quantum annealers as Boltzmann samplers to perform quantum-assisted, end-to-end training of QVAE. The hybrid structure of QVAE allows us to deploy current-generation quantum annealers in QCH generative models to achieve competitive performance on datasets such as MNIST. The results presented in this paper suggest that commercially available quantum annealers can be deployed, in conjunction with well-crafted classical deep neural networks, to achieve competitive results in unsupervised and semisupervised tasks on large-scale datasets. We also provide evidence that our setup is able to exploit large latent-space (Q)BMs, which develop slowly mixing modes. This expressive latent space results in slow and inefficient classical sampling, paving the way toward quantum advantage with quantum annealing in realistic sampling applications.
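The annealer's role in training can be made concrete: the log-likelihood gradient of a Boltzmann machine prior is a difference of correlations, and the negative phase requires samples from the model distribution, which is exactly where annealer samples substitute for classical MCMC. A minimal sketch for the coupling matrix (names and the simplified form are illustrative, not the exact formulation of Ref. [1]):

```python
import numpy as np

def bm_coupling_grad(z_posterior, z_model):
    """Sketch of the gradient of a BM prior's couplings W.

    z_posterior: binary latent samples from the encoder (positive phase).
    z_model:     binary samples from the BM itself (negative phase) --
                 these are the samples a quantum annealer can provide
                 in place of slow Markov chain Monte Carlo.
    Both arrays have shape (batch, n_latent); returns (n_latent, n_latent).
    """
    pos = z_posterior.T @ z_posterior / len(z_posterior)  # <z z^T>_data
    neg = z_model.T @ z_model / len(z_model)              # <z z^T>_model
    return pos - neg
```

When the model samples match the posterior statistics the gradient vanishes, which is the fixed point the training seeks.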
We investigate an extended version of the quantum Ising model which includes beyond-nearest-neighbour interactions and an additional site-dependent longitudinal magnetic field. Treating the interaction exactly and using perturbation theory in the longitudinal field, we calculate the energy spectrum and find that the presence of beyond-nearest-neighbour interactions enhances the minimal gap between the ground state and the first excited state, irrespective of the nature of decay of these interactions along the chain. The longitudinal field adds a correction to this gap that is independent of the number of qubits. We discuss the application of our model to implementing specific instances of 3-satisfiability problems (Exact Cover) and make a connection to a chain of flux qubits.
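A Hamiltonian of the generic form described here is (as a sketch; the precise decay law of the couplings and the field profile are particular to the paper and left generic):

$$ H = -\Gamma \sum_{i=1}^{N} \sigma^x_i \;-\; \sum_{i<j} J_{ij}\, \sigma^z_i \sigma^z_j \;-\; \sum_{i=1}^{N} h_i\, \sigma^z_i, $$

where $\Gamma$ is the transverse field, the couplings $J_{ij}$ extend beyond nearest neighbours and decay with the distance $|i-j|$ along the chain, and $h_i$ is the site-dependent longitudinal field treated perturbatively.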
Quantum computers, which harness quantum superposition for parallel computation, promise to outperform their classical counterparts and offer exponentially better scaling. The term quantum advantage was proposed to mark the key point when people can solve a classically intractable problem by artificially controlling a quantum system at an unprecedented scale, even without error correction or known practical applications. Boson sampling, a problem about quantum evolutions of multi-photons on multimode photonic networks, as well as its variants, has been considered a promising candidate to reach this milestone. However, current photonic platforms suffer from scaling problems, both in photon numbers and circuit modes. Here, we propose a new variant of the problem, timestamp membosonsampling, exploiting the timestamp information of single photons as free resources, so that the scaling of the problem can in principle be extended arbitrarily. We experimentally verify the scheme on a self-looped photonic chip inspired by the memristor, and obtain multi-photon registrations up to 56-fold in 750,000 modes with a Hilbert space up to $10^{254}$. Our work exhibits an integrated and cost-efficient shortcut into the quantum advantage regime in a photonic system far beyond previous scenarios, and provides a scalable and controllable platform for quantum information processing.
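The classical hardness underlying boson sampling comes from the matrix permanent: the amplitude of each multi-photon outcome is the permanent of a submatrix of the network's unitary, and computing permanents is #P-hard. A brute-force sketch via Ryser's formula illustrates the exponential cost (for illustration only; this is not how the experiment is verified):

```python
# Hedged sketch: Ryser's formula for the matrix permanent,
# O(2^n * n^2) -- already impractical for tens of photons,
# which is the source of boson sampling's classical hardness.
from itertools import combinations

def permanent(A):
    """perm(A) = (-1)^n * sum over nonempty column subsets S of
    (-1)^{|S|} * prod_i sum_{j in S} A[i][j]."""
    n = len(A)
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1.0
            for row in A:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total
```

For an all-ones $n \times n$ matrix the permanent is $n!$, and each added photon roughly doubles the work, which is why even modest photon numbers outrun classical simulation.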
Recent advances in quantum technology have led to the development and manufacturing of experimental programmable quantum annealing optimizers that contain hundreds of quantum bits. These optimizers, known as "D-Wave" chips, promise to solve practical optimization problems potentially faster than conventional "classical" computers. Attempts to quantify the quantum nature of these chips have been met with both excitement and skepticism, but have also brought up numerous fundamental questions pertaining to the distinguishability of quantum annealers from their classical thermal counterparts. Here, we propose a general method aimed at answering these questions, and apply it to experimentally study the D-Wave chip. Inspired by spin-glass theory, we generate optimization problems with a wide spectrum of "classical hardness", which we also define. By investigating the chip's response to classical hardness, we surprisingly find that the chip's performance scales unfavorably as compared to several analogous classical algorithms. We detect, quantify and discuss purely classical effects that possibly mask the quantum behavior of the chip.
The restricted Boltzmann machine (RBM) is an energy-based, undirected graphical model. It is commonly used for unsupervised and supervised machine learning. Typically, an RBM is trained using contrastive divergence (CD). However, training with CD is slow and does not estimate the exact gradient of the log-likelihood cost function. In this work, the model expectation of gradient learning for the RBM has been calculated using a quantum annealer (D-Wave 2000Q), which is much faster than the Markov chain Monte Carlo (MCMC) used in CD. Training and classification results are compared with CD. The classification accuracy results indicate similar performance of both methods. Image reconstruction as well as log-likelihood calculations are used to compare the performance of quantum and classical algorithms for RBM training. It is shown that the samples obtained from the quantum annealer can be used to train an RBM on a 64-bit "bars and stripes" data set with classification performance similar to an RBM trained with CD. Though training based on CD showed improved learning performance, training using a quantum annealer eliminates the computationally expensive MCMC steps of CD.
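The CD step that the annealer replaces can be sketched in a few lines of NumPy (CD-1 on binary units; a minimal illustration under standard RBM conventions, not the paper's exact training loop):

```python
# Hedged sketch: one CD-1 parameter update for a binary RBM.
# W: (n_visible, n_hidden) couplings; b, c: visible/hidden biases.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b, c, lr=0.1):
    """One contrastive-divergence step on a batch v0 of shape (B, n_visible)."""
    ph0 = sigmoid(v0 @ W + c)                      # positive phase: p(h|v0)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + b)                    # one Gibbs step back to visibles
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)                      # negative phase: p(h|v1)
    # the negative-phase statistics (v1, ph1) are what annealer samples replace
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / len(v0)
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c
```

The single Gibbs step is why CD's gradient is biased; quantum-assisted training swaps the negative-phase samples for draws from the annealer's (approximately Boltzmann) distribution.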