Gaussian boson sampling is a promising scheme for demonstrating a quantum computational advantage using photonic states that are accessible in a laboratory and, thus, offer scalable sources of quantum light. In this contribution, we study two-point photon-number correlation functions to gain insight into the interference of Gaussian states in optical networks. We investigate the characteristic features of statistical signatures which enable us to distinguish classical from quantum interference. In contrast to the typical implementation of boson sampling, we find additional contributions to the correlators under study which stem from the phase dependence of Gaussian states and which are not observable when Fock states interfere. Using the first three moments, we formulate the tools required to experimentally observe signatures of quantum interference of Gaussian states using two outputs only. By considering the current architectural limitations in realistic experiments, we further show that a statistically significant discrimination between quantum and classical interference is possible even in the presence of loss, noise, and a finite photon-number resolution. Therefore, we formulate and apply a theoretical framework to benchmark the quantum features of Gaussian boson sampling under realistic conditions.
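As a concrete illustration of the two-output correlators described above, the following minimal numpy sketch (not the authors' code; the squeezing convention, the 50:50 beam-splitter matrix, and the parameter values are illustrative assumptions) evaluates the mean photon numbers and the two-point photon-number covariance of two single-mode squeezed vacua interfering on a beam splitter, using Wick's theorem for zero-mean Gaussian states, $\mathrm{Cov}(n_i,n_j)=|\langle a_i^\dagger a_j\rangle|^2+|\langle a_i a_j\rangle|^2$ for $i\neq j$.

import numpy as np

def photon_number_stats(U, r, phases):
    """Mean photon numbers and the (1,2) photon-number covariance for squeezed
    vacua sent through a linear-optical network U (assuming b_i = sum_j U_ij a_j).
    r: common squeezing parameter; phases: squeezing phase of each input mode."""
    sh, ch = np.sinh(r), np.cosh(r)
    N_in = (sh**2) * np.eye(len(phases), dtype=complex)          # <a_i^dag a_j> of the inputs
    M_in = np.diag([-np.exp(1j * p) * sh * ch for p in phases])  # <a_i a_j> of the inputs
    N_out = U.conj() @ N_in @ U.T
    M_out = U @ M_in @ U.T
    mean_n = N_out.diagonal().real
    cov_12 = abs(N_out[0, 1])**2 + abs(M_out[0, 1])**2           # Wick's theorem, i != j
    return mean_n, cov_12

bs = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # one common 50:50 beam-splitter convention
for dphi in (0.0, np.pi / 2, np.pi):            # relative squeezing phase of the two inputs
    mean_n, cov = photon_number_stats(bs, r=0.5, phases=[0.0, dphi])
    print(f"relative phase {dphi:.2f}: <n> = {mean_n}, Cov(n1, n2) = {cov:.4f}")

The resulting dependence of $\mathrm{Cov}(n_1,n_2)$ on the relative squeezing phase, which has no analogue for interfering Fock states, is the kind of additional contribution referred to in the abstract.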
Boson Sampling has emerged as a tool to explore the advantages of quantum over classical computers, as it does not require universal control over the quantum system, which favours current photonic experimental platforms. Here, we introduce Gaussian Boson Sampling, a classically hard-to-solve problem that uses squeezed states as a non-classical resource. We relate the probability of measuring a specific photon pattern from a general Gaussian state in the Fock basis to a matrix function called the hafnian, which answers the last remaining question of sampling from Gaussian states. Based on this result, we design Gaussian Boson Sampling, a #P-hard problem, using squeezed states. This approach leads to a more efficient photonic boson sampler with significant advantages in generation probability and measurement time over currently existing protocols.
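For reference, the hafnian of a symmetric $2n\times 2n$ matrix is the sum, over all perfect matchings of the indices $\{1,\dots,2n\}$, of the products of the matched entries; the brute-force Python sketch below (illustrative only, exponential in the matrix size and unrelated to the optimized algorithms used in practice) makes the definition explicit.

import numpy as np

def hafnian(A):
    """Hafnian by recursion over perfect matchings: pair index 0 with every other
    index j and recurse on the remaining submatrix. Reference code for small
    matrices only; the number of terms grows as (2n - 1)!!."""
    dim = A.shape[0]
    if dim == 0:
        return 1.0
    if dim % 2:
        return 0.0
    total = 0.0
    for j in range(1, dim):
        keep = [k for k in range(1, dim) if k != j]
        total += A[0, j] * hafnian(A[np.ix_(keep, keep)])
    return total

# Sanity check: the hafnian of the adjacency matrix of K_4 counts its perfect matchings.
K4 = np.ones((4, 4)) - np.eye(4)
print(hafnian(K4))   # 3.0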
Gaussian boson sampling (GBS) provides a highly efficient approach to making use of squeezed states from parametric down-conversion to solve a classically intractable sampling problem. The GBS protocol not only significantly enhances the photon generation probability compared to standard boson sampling with single-photon Fock states, but also links to potential applications such as dense subgraph problems and molecular vibronic spectra. Here, we report the first experimental demonstration of GBS using squeezed-state sources with simultaneously high photon indistinguishability and collection efficiency. We implement and validate 3-, 4- and 5-photon GBS with high sampling rates of 832 kHz, 163 kHz and 23 kHz, respectively, which is more than 4.4, 12.0, and 29.5 times faster than in previous experiments. Further, we observe a quantum speed-up on an NP-hard optimization problem when comparing with a simulated thermal sampler and a uniform sampler.
The tantalizing promise of quantum computational speedup in solving certain problems has been strongly supported by recent experimental evidence from a high-fidelity 53-qubit superconducting processor and Gaussian boson sampling (GBS) with up to 76 detected photons. Analogous to the increasingly sophisticated Bell tests that continued to refute local hidden-variable theories, quantum computational advantage tests are expected to provide increasingly compelling experimental evidence against the extended Church-Turing thesis. In this direction, continued competition between upgraded quantum hardware and improved classical simulations is required. Here, we report a new GBS experiment that produces up to 113 detection events out of a 144-mode photonic circuit. We develop a new high-brightness and scalable quantum light source, exploring the idea of stimulated emission of squeezed photons, which has simultaneously near-unity purity and efficiency. This GBS is programmable by tuning the phase of the input squeezed states. We demonstrate a new method to efficiently validate the samples by inferring from computationally friendly subsystems, which rules out hypotheses including distinguishable photons and thermal states. We show that our noisy GBS experiment passes the nonclassicality test based on an inequality, and we reveal non-trivial genuine high-order correlations in the GBS samples, which are evidence of robustness against possible classical simulation schemes. The photonic quantum computer, Jiuzhang 2.0, yields a Hilbert space dimension up to $10^{43}$ and a sampling rate that is $10^{24}$ times faster than brute-force simulation on supercomputers.
Since the development of boson sampling, there has been a quest to construct more efficient and experimentally feasible protocols to test the computational complexity of sampling from photonic states. In this paper we interpret and extend the results presented in [Phys. Rev. Lett. 119, 170501 (2017)]. We derive an expression that relates the probability of measuring a specific photon output pattern from a Gaussian state to the \textit{hafnian} matrix function and use it to design a Gaussian Boson Sampling protocol. Then, we discuss the advantages that this protocol has relative to other photonic protocols and the experimental requirements for Gaussian Boson Sampling. Finally, we relate it to the previously most general protocol, Scattershot Boson Sampling [Phys. Rev. Lett. 113, 100502 (2014)].
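Schematically, and in the conventions of the cited Letter (zero displacement, $\sigma$ the $2N\times 2N$ covariance matrix in the $(\mathbf{a},\mathbf{a}^{\dagger})$ basis with vacuum $\tfrac{1}{2}\mathbb{1}$), the central relation takes the form
\[
\Pr(\bar{n})=\frac{1}{\sqrt{\det\sigma_Q}}\,\frac{\operatorname{haf}(A_S)}{n_1!\cdots n_N!},
\qquad
\sigma_Q=\sigma+\tfrac{1}{2}\mathbb{1}_{2N},
\quad
A=X_{2N}\bigl(\mathbb{1}_{2N}-\sigma_Q^{-1}\bigr),
\quad
X_{2N}=\begin{pmatrix}0&\mathbb{1}_N\\ \mathbb{1}_N&0\end{pmatrix},
\]
where $A_S$ is the submatrix of $A$ in which the row and column of mode $j$ are repeated $n_j$ times; the precise prefactors and conventions are those of the cited references.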
Boson sampling is expected to be one of the important milestones demonstrating quantum supremacy. The present work establishes the benchmarking of Gaussian boson sampling (GBS) with threshold detection on the Sunway TaihuLight supercomputer. To achieve the best performance and provide a competitive baseline for future quantum computing studies, the selected simulation algorithm is fully optimized using a set of innovative approaches, including a parallel scheme and an instruction-level optimization method. Furthermore, data precision and instruction scheduling are handled in a sophisticated manner by an adaptive precision-optimization scheme and a DAG-based heuristic search algorithm, respectively. Based on these methods, a highly efficient and parallel quantum sampling algorithm is designed. The largest run enables us to obtain one Torontonian function of a 100 × 100 submatrix from 50-photon GBS within 20 hours in 128-bit precision and 2 days in 256-bit precision.
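To make explicit what such a benchmark evaluates, the brute-force sketch below (illustrative only; the quadrature ordering and the vacuum-covariance normalization are assumptions, and it is unrelated to the optimized Sunway implementation) computes the probability that all threshold detectors click for a zero-mean Gaussian state by a Torontonian-style inclusion-exclusion over the vacuum probabilities of mode subsets.

import itertools
import numpy as np

def vacuum_probability(sigma, modes):
    """P(vacuum on the given modes) for a zero-mean Gaussian state.
    sigma: 2M x 2M quadrature covariance matrix, ordering (x_1..x_M, p_1..p_M),
    with the vacuum normalized to identity/2."""
    if not modes:
        return 1.0
    M = sigma.shape[0] // 2
    idx = list(modes) + [m + M for m in modes]
    sigma_Q = sigma[np.ix_(idx, idx)] + 0.5 * np.eye(2 * len(modes))
    return 1.0 / np.sqrt(np.linalg.det(sigma_Q))

def all_click_probability(sigma):
    """Probability that every threshold detector fires, via inclusion-exclusion
    over the subsets of modes that register vacuum (2^M terms, small M only)."""
    M = sigma.shape[0] // 2
    return sum(
        (-1) ** len(subset) * vacuum_probability(sigma, subset)
        for size in range(M + 1)
        for subset in itertools.combinations(range(M), size)
    )

# Sanity check: a single thermal mode with mean photon number nbar fires a
# threshold detector with probability nbar / (nbar + 1).
nbar = 2.0
print(all_click_probability((nbar + 0.5) * np.eye(2)))   # 0.666...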