Quantum mechanics promises computational powers beyond the reach of classical computers. Current technology is on the brink of an experimental demonstration of the superior power of quantum computation compared to classical devices. For such a demonstration to be meaningful, experimental noise must not affect the computational power of the device; this requirement fails when a classical algorithm can exploit the noise to simulate the quantum system efficiently. In this work, we present an algorithm which simulates boson sampling, a quantum advantage demonstration based on many-body quantum interference of indistinguishable bosons, in the presence of optical loss. Finding the level of noise at which this approximation becomes efficient lets us map out the maximum level of imperfections at which it is still possible to demonstrate a quantum advantage. We show that current photonic technology falls short of this benchmark. These results call into question the suitability of boson sampling as a quantum advantage demonstration.
We study the hardness of classically simulating Gaussian boson sampling at nonzero photon distinguishability. We find that, as in regular boson sampling, distinguishability causes exponential attenuation of the many-photon interference terms in Gaussian boson sampling. Barring an open problem in the theory of matrix permanents, this leads to an efficient classical algorithm for simulating Gaussian boson sampling in the presence of distinguishability. We also study a new form of boson sampling based on photon-number superposition states, for which we also show noise sensitivity. The fact that such superposition boson sampling is not simulable with our method at zero distinguishability is the first evidence for the computational hardness of this problem.
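As a toy illustration of how distinguishability attenuates interference (not taken from the abstract above), consider two-photon Hong-Ou-Mandel interference at a balanced beamsplitter: the coincidence probability interpolates between complete suppression for identical photons and the classical value of 1/2 for fully distinguishable ones, as a function of the wavefunction overlap.

```python
def hom_coincidence(overlap):
    """Coincidence probability for two photons meeting at a 50:50
    beamsplitter: P = (1 - |overlap|^2) / 2.  Perfect indistinguishability
    (|overlap| = 1) suppresses coincidences entirely; as the overlap
    shrinks, the interference term dies and P rises to the classical 1/2."""
    return (1 - abs(overlap) ** 2) / 2
```

In many-photon interference the analogous attenuation is exponential in the number of interfering photons, which is the effect the abstract exploits.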
Since its introduction, Boson Sampling has been the subject of intense study in the world of quantum computing. The task is to sample independently from the set of all $n \times n$ submatrices built from possibly repeated rows of a larger $m \times n$ complex matrix, according to a probability distribution related to the permanents of the submatrices. Experimental systems exploiting quantum photonic effects can in principle perform the task at great speed. In the framework of classical computing, Aaronson and Arkhipov (2011) showed that the exact Boson Sampling problem cannot be solved in polynomial time unless the polynomial hierarchy collapses to the third level. Indeed, for a number of years the fastest known exact classical algorithm ran in $O(\binom{m+n-1}{n} n 2^n)$ time per sample, emphasising the potential speed advantage of quantum computation. The advantage was reduced by Clifford and Clifford (2018), who gave a significantly faster classical solution taking $O(n 2^n + \operatorname{poly}(m,n))$ time and linear space, matching the complexity of computing the permanent of a single matrix when $m$ is polynomial in $n$. We continue by presenting an algorithm for Boson Sampling whose average-case time complexity is much lower when $m$ is proportional to $n$. In particular, when $m = n$ our algorithm runs in approximately $O(n \cdot 1.69^n)$ time on average. This result further increases the problem size needed to establish quantum computational supremacy via Boson Sampling.
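The permanent-based complexities quoted above can be made concrete with Ryser's inclusion-exclusion formula, which evaluates an $n \times n$ permanent in $O(n 2^n)$ arithmetic operations. The sketch below is purely illustrative of that cost; it is not the sampling algorithm of the paper.

```python
from itertools import combinations

def permanent(M):
    """Permanent of a square matrix via Ryser's formula:
    perm(M) = (-1)^n * sum over nonempty column subsets S of
    (-1)^|S| * prod_i (sum_{j in S} M[i][j]).
    There are 2^n - 1 subsets, each costing O(n^2) naively, giving
    the O(n * 2^n) scaling referenced in the abstract (with Gray-code
    ordering the per-subset cost drops to O(n))."""
    n = len(M)
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1.0
            for row in M:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total
```

For the all-ones $n \times n$ matrix the permanent is $n!$, which gives a quick sanity check.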
Gaussian Boson Sampling (GBS) provides a highly efficient approach to using squeezed states from parametric down-conversion to solve a classically hard sampling problem. The GBS protocol not only significantly enhances the photon generation probability compared to standard boson sampling with single-photon Fock states, but also links to potential applications such as dense subgraph problems and molecular vibronic spectra. Here, we report the first experimental demonstration of GBS using squeezed-state sources with simultaneously high photon indistinguishability and collection efficiency. We implement and validate 3-, 4-, and 5-photon GBS with high sampling rates of 832 kHz, 163 kHz, and 23 kHz, respectively, more than 4.4, 12.0, and 29.5 times faster than the previous experiments. Further, we observe a quantum speed-up on an NP-hard optimization problem when comparing with a simulated thermal sampler and a uniform sampler.
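Whereas standard boson sampling probabilities are governed by matrix permanents, GBS output probabilities are governed by hafnians of submatrices of the Gaussian state's covariance data. As an illustrative (exponential-time) sketch, not drawn from the experiment above, the hafnian can be computed by the pairing recursion that sums over perfect matchings:

```python
def hafnian(A):
    """Hafnian of a symmetric matrix by pairing index 0 with each
    possible partner j and recursing on the remaining indices:
        haf(A) = sum_j A[0][j] * haf(A with rows/cols 0 and j removed).
    The hafnian counts weighted perfect matchings; diagonal entries
    are never used.  Odd-dimensional matrices have hafnian 0."""
    n = len(A)
    if n == 0:
        return 1.0
    if n % 2:
        return 0.0
    total = 0.0
    for j in range(1, n):
        keep = [k for k in range(n) if k not in (0, j)]
        minor = [[A[r][c] for c in keep] for r in keep]
        total += A[0][j] * hafnian(minor)
    return total
```

For the all-ones $2k \times 2k$ matrix the hafnian equals $(2k-1)!! $, the number of perfect matchings of the complete graph, e.g. 3 for a $4 \times 4$ matrix.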
Although stoquastic Hamiltonians are known to be simulable via sign-problem-free quantum Monte Carlo (QMC) techniques, the non-stoquasticity of a Hamiltonian does not necessarily imply the existence of a QMC sign problem. We give a necessary and sufficient condition for the QMC-simulability of Hamiltonians in a fixed basis in terms of geometric phases associated with the chordless cycles of the weighted graphs whose adjacency matrices are the Hamiltonians. We use our findings to provide a construction for non-stoquastic, yet sign-problem-free and hence QMC-simulable, quantum many-body models. We also demonstrate why the simulation of truly sign-problematic models using the QMC weights of the stoquasticized Hamiltonian is generally sub-optimal, and we offer a superior alternative.
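The stoquasticity condition itself is simple to state and check: in the chosen basis, every off-diagonal matrix element must be real and non-positive. A minimal sketch (the matrices here are toy examples, not from the paper) also shows the simplest way a non-stoquastic Hamiltonian can be sign-problem-free, namely when a diagonal sign change of basis states cures the positive off-diagonals:

```python
def is_stoquastic(H, tol=1e-12):
    """True if every off-diagonal element of H is real and non-positive,
    the standard definition of stoquasticity in a fixed basis."""
    n = len(H)
    for i in range(n):
        for j in range(n):
            if i != j:
                x = complex(H[i][j])
                if abs(x.imag) > tol or x.real > tol:
                    return False
    return True

def conjugate_by_signs(H, signs):
    """Rebasis H by flipping the sign of selected basis states:
    H' = D H D with D = diag(signs), each sign in {+1, -1}.
    This leaves the spectrum (and the physics) unchanged."""
    n = len(H)
    return [[signs[i] * H[i][j] * signs[j] for j in range(n)] for i in range(n)]

# A non-stoquastic 2x2 Hamiltonian whose sign problem is removable:
H = [[1.0, 0.5], [0.5, -1.0]]
H_cured = conjugate_by_signs(H, [1, -1])  # off-diagonals flip to -0.5
```

The paper's cycle-phase condition generalizes this idea: it characterizes exactly when such a curing transformation exists, beyond what any single sign gauge can reveal.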
Boson sampling is considered a strong candidate for demonstrating quantum computational supremacy over classical computers. However, previous proof-of-principle experiments suffered from small photon numbers and low sampling rates, owing to the inefficiencies of the single-photon sources and multi-port optical interferometers. Here, we develop two central components for high-performance boson sampling: robust multi-photon interferometers with a transmission rate of 0.99, and actively demultiplexed single-photon sources based on a quantum-dot micropillar with simultaneously high efficiency, purity, and indistinguishability. We implement and validate 3-, 4-, and 5-photon boson sampling, achieving sampling rates of 4.96 kHz, 151 Hz, and 4 Hz, respectively, which are over 24,000 times faster than the previous experiments, and over 220 times faster than obtaining one sample by calculating the matrix permanents on the first electronic computer (ENIAC) and the first transistorized computer (TRADIC). Our architecture can feasibly be scaled up to larger photon numbers and higher rates to race against classical computers, and might provide experimental evidence against the Extended Church-Turing Thesis.