We investigate prime factorization from two perspectives: quantum annealing and computational algebraic geometry, specifically Gröbner bases. We present a novel scalable algorithm that combines the two approaches and leads to the factorization of all bi-primes up to just over $200{,}000$, the largest number factored to date using a quantum processor.
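To make the algebraic-geometry side of this approach concrete, here is a minimal sketch, assuming a toy bi-prime and SymPy's `groebner` routine rather than the authors' scalable pipeline: the factoring equation together with the binariness conditions $x^2 = x$ forms a polynomial system whose reduced Gröbner basis reads off the factor bits directly.

```python
# Minimal sketch (assumption: not the authors' pipeline) showing how a tiny
# bi-prime factorization can be phrased as a polynomial system over binary
# variables and solved with a Groebner basis via SymPy.
from sympy import symbols, groebner

p1, p2, q1 = symbols("p1 p2 q1")

# Factor N = 15 with odd factors p = 1 + 2*p1 + 4*p2 and q = 1 + 2*q1.
N = 15
p = 1 + 2 * p1 + 4 * p2
q = 1 + 2 * q1

# The system: the factoring equation plus the "binariness" equations x^2 = x.
system = [p * q - N, p1**2 - p1, p2**2 - p2, q1**2 - q1]

G = groebner(system, p1, p2, q1, order="lex")
print(G)  # expected reduced basis: [p1, p2 - 1, q1 - 1], i.e. p = 5, q = 3
```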
We have developed a framework to convert an arbitrary integer factorization problem to an executable Ising model by first writing it as an optimization function and then transforming the $k$-bit coupling ($k \geq 3$) terms to quadratic terms using ancillary variables. The method is efficient and uses $\mathcal{O}(\log^2 N)$ binary variables (qubits) for finding the factors of an integer $N$. The method was tested using the D-Wave 2000Q for finding an embedding and determining the prime factors of a given composite number. As examples, we present quantum annealing results for factoring 15, 143, 59989, and 376289 using 4, 12, 59, and 94 logical qubits, respectively. The method is general and could be used to factor larger numbers.
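As an illustration of the quadratization step described above, the following sketch (an assumed construction using the standard Rosenberg penalty, not necessarily the paper's exact reduction) replaces a cubic monomial $xyz$ with a quadratic form plus one ancillary variable and checks by brute force that minimizing over the ancilla reproduces the original term.

```python
# Minimal sketch (assumed, not the paper's exact construction) of reducing a
# cubic monomial x*y*z to a quadratic form with one ancilla a, using the
# Rosenberg penalty P(x, y, a) = x*y - 2*x*a - 2*y*a + 3*a, which equals 0
# when a = x*y and is >= 1 otherwise.
from itertools import product

def cubic(x, y, z):
    return x * y * z

def quadratized(x, y, z, a, penalty_weight=2):
    # a is intended to take the value x*y; the penalty enforces this at the minimum.
    rosenberg = x * y - 2 * x * a - 2 * y * a + 3 * a
    return a * z + penalty_weight * rosenberg

# Check: minimizing over the ancilla reproduces the cubic term for every (x, y, z).
for x, y, z in product((0, 1), repeat=3):
    best = min(quadratized(x, y, z, a) for a in (0, 1))
    assert best == cubic(x, y, z)
print("quadratization reproduces x*y*z for all 8 binary assignments")
```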
The road to computing on quantum devices has been accelerated by the promise of using Shor's algorithm to reduce the complexity of prime factorization. However, this promise has not yet been realized due to noisy qubits and the lack of robust error-correction schemes. Here we explore a promising alternative method for prime factorization that uses well-established techniques from variational imaginary time evolution. We create a Hamiltonian whose ground state encodes the solution to the problem and use variational techniques to evolve a state iteratively towards these prime factors. We show that the number of circuits evaluated in each iteration scales as $\mathcal{O}(n^5 d)$, where $n$ is the bit-length of the number to be factorized and $d$ is the depth of the circuit. We use a single layer of entangling gates to factorize several numbers represented using 7, 8, and 9-qubit Hamiltonians. We also verify the method's performance by implementing it on the IBMQ Lima hardware.
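For intuition about the Hamiltonian construction, the sketch below (a brute-force toy with an assumed bit layout, not the variational imaginary-time procedure itself) builds the diagonal cost Hamiltonian $H = (N - pq)^2$ for a small bi-prime and confirms that its ground state encodes the factors.

```python
# Minimal sketch (assumptions: tiny example, brute-force check rather than
# variational imaginary-time evolution) of a diagonal Hamiltonian whose
# ground state encodes the factors of N: H|b> = (N - p(b)*q(b))^2 |b>.
import numpy as np

N = 35                      # bi-prime to factor
n_p, n_q = 2, 2             # bits for p and q beyond the fixed (odd) LSB

def factors_from_bits(bits):
    p_bits, q_bits = bits[:n_p], bits[n_p:]
    p = 1 + sum(b << (i + 1) for i, b in enumerate(p_bits))
    q = 1 + sum(b << (i + 1) for i, b in enumerate(q_bits))
    return p, q

dim = 2 ** (n_p + n_q)
diag = np.empty(dim)
for idx in range(dim):
    bits = [(idx >> k) & 1 for k in range(n_p + n_q)]
    p, q = factors_from_bits(bits)
    diag[idx] = (N - p * q) ** 2

ground = int(np.argmin(diag))
bits = [(ground >> k) & 1 for k in range(n_p + n_q)]
print("ground-state energy:", diag[ground], "factors:", factors_from_bits(bits))
```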
We propose a prime factorizer operated in the framework of quantum annealing (QA). The idea is the inverse operation of a multiplier implemented with QA-based Boolean logic circuits. We designed the QA machine on an application-specific annealing-computing architecture, which efficiently increases the available hardware budget at the cost of restricted functionality. The invertible operation of QA logic gates consisting of superconducting flux qubits was confirmed by circuit simulation with classical noise sources. The circuits were implemented and fabricated using superconducting integrated-circuit technologies with Nb/AlOx/Nb Josephson junctions. We also propose a 2.5-dimensional packaging scheme with a qubit-chip/interposer/package-substrate structure for realizing practically large-scale QA systems.
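The "inverse multiplier" idea can be illustrated at the level of a single gate. The sketch below (a generic QUBO penalty, assumed for illustration rather than taken from the authors' flux-qubit circuits) encodes an AND gate, the building block of a binary multiplier, as an energy function whose ground states are exactly the valid truth-table rows; clamping the output and minimizing then runs the gate in reverse.

```python
# Minimal sketch (an assumption, not the authors' circuit) of an invertible
# logic gate for annealing: a QUBO penalty whose ground states are exactly the
# rows of an AND gate's truth table, so clamping the output and minimizing
# recovers consistent inputs ("inverse" operation).
from itertools import product

def and_penalty(x, y, z):
    # Rosenberg penalty: zero iff z = x AND y, at least 1 otherwise.
    return x * y - 2 * x * z - 2 * y * z + 3 * z

# Forward check: ground states coincide with the AND truth table.
ground = [(x, y, z) for x, y, z in product((0, 1), repeat=3) if and_penalty(x, y, z) == 0]
assert ground == [(x, y, x & y) for x, y in product((0, 1), repeat=2)]

# "Inverse" use: clamp the output to 1 and minimize over the inputs.
inputs = min(product((0, 1), repeat=2), key=lambda xy: and_penalty(xy[0], xy[1], 1))
print("inputs consistent with output 1:", inputs)   # (1, 1)
```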
In this paper we consider the use of certain classical analogues to quantum tunneling behavior to improve the performance of simulated annealing on a discrete spin system of the general Ising form. Specifically, we consider the use of multiple simultaneous spin flips at each annealing step as an analogue to quantum spin coherence as well as modifications of the Boltzmann acceptance probability to mimic quantum tunneling. We find that the use of multiple spin flips can indeed be advantageous under certain annealing schedules, but only for long anneal times.
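A minimal sketch of the multi-spin-flip idea, with hypothetical couplings, schedule, and flip count, is given below: each Metropolis proposal flips several spins of an Ising ring simultaneously while the temperature is annealed linearly.

```python
# Minimal sketch (hypothetical parameters) of simulated annealing on an Ising
# ring where each proposal flips several spins at once, the "multi-spin-flip"
# analogue of quantum spin coherence discussed above.
import numpy as np

rng = np.random.default_rng(0)
n, steps, n_flips = 32, 20_000, 3          # spins, anneal steps, simultaneous flips
J = rng.normal(size=n)                     # random nearest-neighbour couplings (ring)

def energy(s):
    return -np.sum(J * s * np.roll(s, -1))

s = rng.choice([-1, 1], size=n)
E = energy(s)
for t in range(steps):
    T = 2.0 * (1.0 - t / steps) + 1e-3     # linear annealing schedule
    idx = rng.choice(n, size=n_flips, replace=False)
    s_new = s.copy()
    s_new[idx] *= -1                       # flip several spins simultaneously
    E_new = energy(s_new)
    # Standard Boltzmann (Metropolis) acceptance probability
    if E_new <= E or rng.random() < np.exp(-(E_new - E) / T):
        s, E = s_new, E_new
print("final energy:", E, "ground-state bound:", -np.sum(np.abs(J)))
```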
Gaussian boson sampling exploits squeezed states to provide a highly efficient way to demonstrate quantum computational advantage. We perform experiments with 50 input single-mode squeezed states with high indistinguishability and squeezing parameters, which are fed into a 100-mode ultralow-loss interferometer with full connectivity and random transformation, and sampled using 100 high-efficiency single-photon detectors. The whole optical setup is phase-locked to maintain high coherence between the superposition of all photon-number states. We observe up to 76 output photon clicks, which yield an output state-space dimension of $10^{30}$ and a sampling rate that is $10^{14}$ times faster than using the state-of-the-art simulation strategy and supercomputers. The obtained samples are validated against various hypotheses, including thermal states, distinguishable photons, and a uniform distribution.