
Classically simulating near-term partially-distinguishable and lossy boson sampling

Published by Alexandra Moylett
Publication date: 2019
Research field: Physics
Paper language: English





Boson Sampling is the problem of sampling from the same distribution as indistinguishable single photons at the output of a linear optical interferometer. It is an example of a non-universal quantum computation which is believed to be feasible in the near term and cannot be simulated on a classical machine. Like all purported demonstrations of quantum supremacy, this motivates optimizing classical simulation schemes for a realistic model of the problem, in this case Boson Sampling when the implementation experiences photon loss or distinguishability. Although current simulation schemes for sufficiently imperfect boson sampling are classically efficient, in principle the polynomial runtime can be infeasibly large. In this work, we develop a scheme for classical simulation of Boson Sampling under uniform distinguishability and loss, based on the idea of sampling from distributions where at most k photons are indistinguishable. We show that asymptotically this scheme can provide a polynomial improvement in the runtime compared to classically simulating idealised Boson Sampling. More significantly, we show that in the regime considered experimentally relevant, our approach gives a substantial improvement in runtime over other classical simulation approaches.
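The cost of exact simulation comes from the fact that ideal Boson Sampling output probabilities are squared permanents of n x n submatrices of the interferometer unitary, and the best known exact permanent algorithms take exponential time. As a minimal illustration (not the paper's algorithm), here is Ryser's inclusion-exclusion formula for the permanent, which runs in O(2^n n^2) time:

```python
from itertools import combinations

def permanent(A):
    """Permanent of an n x n matrix via Ryser's inclusion-exclusion formula.

    Exponential O(2^n * n^2) cost, illustrating why exactly simulating
    ideal Boson Sampling (whose output probabilities are |Perm|^2 of
    submatrices of the interferometer unitary) is expensive.
    """
    n = len(A)
    total = 0.0
    for k in range(1, n + 1):
        # sum over all column subsets of size k, with alternating sign
        for cols in combinations(range(n), k):
            prod = 1.0
            for row in A:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** (n - k) * prod
    return total

# e.g. permanent([[1, 2], [3, 4]]) = 1*4 + 2*3 = 10
```

Imperfections such as loss and distinguishability reduce the effective size of the permanents that must be evaluated, which is what the k-indistinguishable-photon decomposition above exploits.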


Read also

We demonstrate how boson sampling with photons of partial distinguishability can be expressed in terms of interference of fewer photons. We use this observation to propose a classical algorithm to simulate the output of a boson sampler fed with photons of partial distinguishability. We find conditions for which this algorithm is efficient, which gives a lower limit on the required indistinguishability to demonstrate a quantum advantage. Under these conditions, adding more photons only polynomially increases the computational cost to simulate a boson sampling experiment.
Malte C. Tichy, 2014
The collective interference of partially distinguishable bosons in multi-mode networks is studied via double-sided Feynman diagrams. The probability for many-body scattering events becomes a multi-dimensional tensor-permanent, which interpolates between distinguishable particles and identical bosons, and easily extends to mixed initial states. The permanent of the distinguishability matrix, composed of all mutual scalar products of the single-particle mode-functions, emerges as a natural measure for the degree of interference: It yields a bound on the difference between event probabilities for partially distinguishable bosons and the idealized species, and exactly quantifies the degree of bosonic bunching.
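The distinguishability matrix described above is simply the Gram matrix of the single-particle mode functions. A minimal sketch (with illustrative two-photon mode functions, not data from the paper) shows its two extremes: identical photons give a permanent of n!, fully distinguishable photons give 1:

```python
import numpy as np
from itertools import permutations

def distinguishability_matrix(modes):
    """Gram matrix S with S[j, k] = <phi_j | phi_k>, where each row of
    `modes` is a normalized single-particle mode function."""
    return modes.conj() @ modes.T

def perm_bruteforce(S):
    """Brute-force O(n! * n) permanent; fine for tiny examples."""
    n = S.shape[0]
    return sum(np.prod([S[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

# Two identical photons: S is the all-ones matrix, permanent = 2! = 2.
# Two fully distinguishable photons: S is the identity, permanent = 1.
```

The permanent of S thus interpolates between the bunching-enhanced identical case and the classical distinguishable case, matching its role as an interference measure.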
Efficient sampling from a classical Gibbs distribution is an important computational problem with applications ranging from statistical physics over Monte Carlo and optimization algorithms to machine learning. We introduce a family of quantum algorithms that provide unbiased samples by preparing a state encoding the entire Gibbs distribution. We show that this approach leads to a speedup over a classical Markov chain algorithm for several examples including the Ising model and sampling from weighted independent sets of two different graphs. Our approach connects computational complexity with phase transitions, providing a physical interpretation of quantum speedup. Moreover, it opens the door to exploring potentially useful sampling algorithms on near-term quantum devices as the algorithm for sampling from independent sets on certain graphs can be naturally implemented using Rydberg atom arrays.
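The classical Markov chain baseline referred to here is typically a Metropolis-type sampler. As a hedged sketch of that baseline (a 1D Ising chain with illustrative parameters, not the examples studied in the paper):

```python
import math
import random

def metropolis_ising(n, beta, J=1.0, steps=10000, seed=0):
    """Metropolis sampler targeting the Gibbs distribution of a 1D Ising
    chain with open boundaries, energy E = -J * sum_i s_i s_{i+1}.
    A minimal classical Markov-chain baseline; all parameters are
    illustrative."""
    rng = random.Random(seed)
    spins = [rng.choice([-1, 1]) for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        # energy change from flipping spin i
        dE = 0.0
        if i > 0:
            dE += 2 * J * spins[i] * spins[i - 1]
        if i < n - 1:
            dE += 2 * J * spins[i] * spins[i + 1]
        # accept with the Metropolis probability min(1, exp(-beta*dE))
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i] = -spins[i]
    return spins
```

Such chains only produce unbiased samples after mixing, whereas the quantum approach described above prepares a state encoding the whole Gibbs distribution directly.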
Noise dominates every aspect of near-term quantum computers, rendering it exceedingly difficult to carry out even small computations. In this paper we are concerned with the modelling of noise in NISQ computers. We focus on three error groups that represent the main sources of noise during a computation and present quantum channels that model each source. We engineer a noise model that combines all three noise channels and simulates the evolution of the quantum computer using its calibrated error rates. We run various experiments of our model, showcasing its behaviour compared to other noise models and an IBM quantum computer. We find that our model provides a better approximation of the quantum computer's behaviour than the other models. Following this, we use a genetic algorithm to optimize the parameters used by our noise model, bringing the behaviour of the model even closer to the quantum computer. Finally, a comparison between the pre- and post-optimization parameters reveals that, according to our model, certain operations can be more or less erroneous than the hardware-calibrated parameters show.
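A standard example of the kind of quantum channel such noise models are built from is the single-qubit depolarizing channel, which mixes a state toward the maximally mixed state. This is a generic textbook channel, not necessarily one of the three channels used in the paper:

```python
import numpy as np

def depolarize(rho, p):
    """Single-qubit depolarizing channel:
    rho -> (1 - p) * rho + p * I/2.
    `p` is an illustrative error rate; in a calibrated noise model it
    would be set from the device's reported error rates."""
    identity = np.eye(2)
    return (1 - p) * rho + p * identity / 2

# p = 0 leaves the state untouched; p = 1 returns the maximally mixed
# state I/2 regardless of the input.
```

Composing channels like this one per gate, with per-operation error rates, is the general recipe behind calibrated noise models of NISQ devices.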
In this work we prove that boson sampling with $N$ particles in $M$ modes is equivalent to short-time evolution with $N$ excitations in an XY model of $2N$ spins. This mapping is efficient whenever the boson bunching probability is small, and errors can be efficiently postselected. This mapping opens the door to boson sampling with quantum simulators or general purpose quantum computers, and highlights the complexity of time-evolution with critical spin models, even for very short times.