
Multi-photon boson-sampling machines beating early classical computers

Published by: Chao-Yang Lu
Publication date: 2016
Research field: Physics
Language: English





Boson sampling is considered a strong candidate for demonstrating quantum computational supremacy over classical computers. However, previous proof-of-principle experiments suffered from small photon numbers and low sampling rates owing to the inefficiencies of the single-photon sources and multi-port optical interferometers. Here, we develop two central components for high-performance boson sampling: robust multi-photon interferometers with a 0.99 transmission rate, and actively demultiplexed single-photon sources based on a quantum-dot micropillar with simultaneously high efficiency, purity, and indistinguishability. We implement and validate 3-, 4-, and 5-photon boson sampling, achieving sampling rates of 4.96 kHz, 151 Hz, and 4 Hz, respectively, which are over 24,000 times faster than previous experiments, and over 220 times faster than obtaining one sample by calculating the matrix permanent on the first electronic computer (ENIAC) and the first transistorized computer (TRADIC) in human history. Our architecture can be scaled up to larger photon numbers and higher rates to race against classical computers, and might provide experimental evidence against the Extended Church-Turing Thesis.
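The classical baseline against which the sampling rates are compared is the cost of evaluating a matrix permanent. A minimal Python sketch of Ryser's inclusion-exclusion formula, the standard exponential-time method (shown here in its plain $O(2^n n^2)$ form, without the Gray-code optimization), illustrates why this cost explodes with photon number:

```python
from itertools import combinations

def ryser_permanent(a):
    """Permanent of an n x n matrix via Ryser's inclusion-exclusion formula.

    perm(A) = (-1)^n * sum over nonempty column subsets S of
              (-1)^{|S|} * prod_i sum_{j in S} a[i][j]
    Runs in O(2^n * n^2) time -- exponential in the matrix size.
    """
    n = len(a)
    total = 0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1
            for row in a:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total

# The permanent of the n x n all-ones matrix is n!:
print(ryser_permanent([[1, 1, 1], [1, 1, 1], [1, 1, 1]]))  # → 6
```

Each n-photon detection event corresponds to the permanent of an n × n submatrix of the interferometer's unitary, so the classical simulation cost grows with this exponential.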




Read also

A new algorithm, called Store-zechin, which reuses stored intermediate data for calculating the permanent of an n × n matrix, is proposed. The analysis shows that the numbers of multiplications and additions required by the new algorithm are far smaller than those required by the well-known Ryser algorithm. In particular, for a 5-boson sampling task, the running time of the Store-zechin algorithm computing the corresponding permanent on ENIAC as well as on TRADIC is lower than that of the sampling operation on a multiphoton boson sampling machine (MPBSM), and thus the MPBSM does not beat the early classical computers (despite this, it is possible that when n grows large enough, a quantum boson sampling machine will beat a classical computer). On a computer, one can design an algorithm that trades space for time, while on an MPBSM one cannot; this is the greatest difference between a universal computer and an MPBSM, and it is precisely why an MPBSM may not be called a (photonic) quantum computer.
Since its introduction, Boson Sampling has been the subject of intense study in the world of quantum computing. The task is to sample independently from the set of all $n \times n$ submatrices built from possibly repeated rows of a larger $m \times n$ complex matrix, according to a probability distribution related to the permanents of the submatrices. Experimental systems exploiting quantum photonic effects can in principle perform the task at great speed. In the framework of classical computing, Aaronson and Arkhipov (2011) showed that the exact Boson Sampling problem cannot be solved in polynomial time unless the polynomial hierarchy collapses to the third level. Indeed, for a number of years the fastest known exact classical algorithm ran in $O(\binom{m+n-1}{n} n 2^n)$ time per sample, emphasising the potential speed advantage of quantum computation. The advantage was reduced by Clifford and Clifford (2018), who gave a significantly faster classical solution taking $O(n 2^n + \operatorname{poly}(m,n))$ time and linear space, matching the complexity of computing the permanent of a single matrix when $m$ is polynomial in $n$. We continue by presenting an algorithm for Boson Sampling whose average-case time complexity is much faster when $m$ is proportional to $n$. In particular, when $m = n$ our algorithm runs in approximately $O(n \cdot 1.69^n)$ time on average. This result further increases the problem size needed to establish quantum computational supremacy via Boson Sampling.
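The distribution being sampled can be written out explicitly for small instances. The sketch below is a brute-force enumerator, not the Clifford–Clifford algorithm: it builds a Haar-random $m \times m$ unitary, forms each output pattern's submatrix (rows repeated per occupation number, columns restricted to the $n$ input modes), and computes $P(S) = |\operatorname{perm}(U_S)|^2 / \prod_j s_j!$. The variable names and the choice of $m = 4$, $n = 2$ are illustrative only.

```python
import numpy as np
from itertools import combinations, combinations_with_replacement
from math import factorial

def permanent(a):
    """Permanent via Ryser's inclusion-exclusion formula (no Gray code)."""
    n = a.shape[0]
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            total += (-1) ** k * np.prod(a[:, cols].sum(axis=1))
    return (-1) ** n * total

def boson_sampling_distribution(u, n):
    """Exact output distribution for n photons entering the first n modes
    of an m-mode interferometer described by unitary u (brute force)."""
    m = u.shape[0]
    patterns = list(combinations_with_replacement(range(m), n))
    probs = []
    for pat in patterns:
        # Rows of u selected (with repetition) by the output pattern,
        # columns restricted to the n occupied input modes.
        sub = u[np.array(pat)][:, :n]
        norm = np.prod([factorial(pat.count(j)) for j in set(pat)])
        probs.append(abs(permanent(sub)) ** 2 / norm)
    return patterns, np.array(probs)

# Haar-random unitary from the QR decomposition of a complex Gaussian matrix.
rng = np.random.default_rng(0)
m, n = 4, 2
z = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
q, r = np.linalg.qr(z)
u = q * (np.diag(r) / np.abs(np.diag(r)))  # phase fix for Haar measure

patterns, probs = boson_sampling_distribution(u, n)
print(round(probs.sum(), 6))  # probabilities over all patterns sum to 1
```

Enumerating all $\binom{m+n-1}{n}$ patterns this way is only feasible for toy sizes; the point of the algorithms discussed above is to draw one sample without ever tabulating the full distribution.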
Quantum mechanics promises computational powers beyond the reach of classical computers. Current technology is on the brink of an experimental demonstration of the superior power of quantum computation compared to classical devices. For such a demonstration to be meaningful, experimental noise must not affect the computational power of the device; this occurs when a classical algorithm can use the noise to simulate the quantum system. In this work, we demonstrate an algorithm which simulates boson sampling, a quantum advantage demonstration based on many-body quantum interference of indistinguishable bosons, in the presence of optical loss. Finding the level of noise where this approximation becomes efficient lets us map out the maximum level of imperfections at which it is still possible to demonstrate a quantum advantage. We show that current photonic technology falls short of this benchmark. These results call into question the suitability of boson sampling as a quantum advantage demonstration.
Hui Wang, Wei Li, Xiao Jiang (2018)
Boson sampling is a well-defined task that is strongly believed to be intractable for classical computers, but can be efficiently solved by a specific quantum simulator. However, an outstanding problem for large-scale experimental boson sampling is scalability. Here we report an experiment on boson sampling with photon loss, and demonstrate that boson sampling with a few photons lost can increase the sampling rate. Our experiment uses a quantum-dot-micropillar single-photon source demultiplexed into up to seven input ports of a 16×16-mode ultra-low-loss photonic circuit, and we detect three-, four-, and five-fold coincidence counts. We implement and validate lossy boson sampling with one and two photons lost, and obtain sampling rates of 187 kHz, 13.6 kHz, and 0.78 kHz for five-, six-, and seven-photon boson sampling with two photons lost, which is 9.4, 13.9, and 18.0 times faster than standard boson sampling, respectively. Our experiment demonstrates an approach to significantly enhancing the sampling rate of multiphoton boson sampling.
We experimentally demonstrate quantum enhanced resolution in confocal fluorescence microscopy exploiting the non-classical photon statistics of single nitrogen-vacancy colour centres in diamond. By developing a general model of super-resolution based on the direct sampling of the kth-order autocorrelation function of the photoluminescence signal, we show the possibility, in some cases, to resolve in principle arbitrarily close emitting centers.