
Inefficiency of classically simulating linear optical quantum computing with Fock-state inputs

Posted by Bryan Gard
Publication date: 2013
Research field: Physics
Paper language: English





Aaronson and Arkhipov recently used computational complexity theory to argue that classical computers very likely cannot efficiently simulate linear, multimode, quantum-optical interferometers with arbitrary Fock-state inputs [Aaronson and Arkhipov, Theory Comput. 9, 143 (2013)]. Here we present an elementary argument that utilizes only techniques from quantum optics. We explicitly construct the Hilbert space for such an interferometer and show that its dimension scales exponentially with all the physical resources. We also show in a simple example just how the Schrödinger and Heisenberg pictures of quantum theory, while mathematically equivalent, are not in general computationally equivalent. Finally, we conclude our argument by comparing the symmetry requirements of multiparticle bosonic interferometers with those of their fermionic counterparts and, using simple physical reasoning, connect the nonsimulatability of the bosonic device to the complexity of computing the permanent of a large matrix.
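To make the scaling concrete, here is a minimal Python sketch (mine, not from the paper) of the two quantities the argument connects: the dimension of the n-photon, m-mode Fock space, which is C(n+m-1, n) and grows exponentially when n and m grow together, and the permanent of an n-by-n matrix computed with Ryser's formula, whose 2^n-term sum reflects the #P-hardness the authors invoke.

```python
# Minimal sketch: Fock-space dimension and the matrix permanent,
# the two quantities the nonsimulatability argument connects.
from itertools import combinations
from math import comb

def fock_dimension(n_photons: int, n_modes: int) -> int:
    """Dimension of the n-photon sector of m modes: C(n + m - 1, n)."""
    return comb(n_photons + n_modes - 1, n_photons)

def permanent(a) -> float:
    """Permanent via Ryser's formula: O(2^n * n) terms, the best known
    scaling up to polynomial factors; #P-hard in general."""
    n = len(a)
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1.0
            for row in a:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total

for n in (2, 4, 8, 16, 32):
    print(n, fock_dimension(n, n))      # grows exponentially in n

print(permanent([[1, 2], [3, 4]]))      # 1*4 + 2*3 = 10
```

Unlike the determinant, no polynomial-time algorithm for the permanent is known; Ryser's exponential-time formula remains essentially the state of the art, which is the point of the comparison with fermionic interferometers.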




Read also

We use a combination of analytical and numerical techniques to calculate the noise threshold and resource requirements for a linear optical quantum computing scheme based on parity-state encoding. Parity-state encoding is used at the lowest level of code concatenation in order to efficiently correct errors arising from the inherent nondeterminism of two-qubit linear-optical gates. When combined with teleported error-correction (using either a Steane or Golay code) at higher levels of concatenation, the parity-state scheme is found to achieve a saving of approximately three orders of magnitude in resources when compared to a previous scheme, at a cost of a somewhat reduced noise threshold.
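As a concrete illustration of the encoding (a toy sketch; the choice of n = 3 and the state-vector construction are mine, not the paper's numerics), the n-qubit parity code takes |0_L> to the uniform superposition of even-parity bit strings and |1_L> to the odd-parity ones. Measuring one qubit in the computational basis leaves a valid codeword on the remaining n - 1 qubits, which is what lets the code absorb a failed nondeterministic gate:

```python
# Toy sketch of the n-qubit parity code: |0_L> is the uniform
# superposition of even-parity strings, |1_L> of odd-parity strings.
import numpy as np
from itertools import product

def parity_codewords(n: int):
    zero_L = np.zeros(2 ** n)
    one_L = np.zeros(2 ** n)
    for bits in product((0, 1), repeat=n):
        idx = int("".join(map(str, bits)), 2)
        if sum(bits) % 2 == 0:
            zero_L[idx] = 1.0
        else:
            one_L[idx] = 1.0
    return zero_L / np.linalg.norm(zero_L), one_L / np.linalg.norm(one_L)

z3, o3 = parity_codewords(3)
print(abs(z3 @ o3))                      # 0.0: codewords are orthogonal

# Measure the last qubit of |0_L> in Z and keep outcome 0: the surviving
# amplitudes form |0_L> of the (n-1)-qubit code on the remaining qubits.
z2, _ = parity_codewords(2)
post = z3.reshape(4, 2)[:, 0].copy()     # project last qubit onto |0>
post /= np.linalg.norm(post)
print(np.allclose(post, z2))             # True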
Linear optics with photon counting is a prominent candidate for practical quantum computing. The protocol by Knill, Laflamme, and Milburn [Nature 409, 46 (2001)] explicitly demonstrates that efficient scalable quantum computing with single photons, linear optical elements, and projective measurements is possible. Subsequently, several improvements on this protocol have started to bridge the gap between theoretical scalability and practical implementation. We review the original theory and its improvements, and we give a few examples of experimental two-qubit gates. We discuss the use of realistic components, the errors they induce in the computation, and how these errors can be corrected.
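The two-photon interference that such gates exploit can be reproduced in a few lines. The following sketch (the Fock truncation and beam-splitter convention are my choices, not KLM's) sends one photon into each port of a 50:50 beam splitter and recovers the Hong-Ou-Mandel effect: the photons bunch, and the |1,1> output amplitude vanishes.

```python
# Self-contained sketch of Hong-Ou-Mandel interference at a 50:50
# beam splitter, the two-photon effect behind linear-optical gates.
import numpy as np
from scipy.linalg import expm

D = 3                                        # Fock truncation: 0, 1, 2 photons
a = np.diag(np.sqrt(np.arange(1, D)), 1)     # single-mode annihilation operator
I = np.eye(D)
A = np.kron(a, I)                            # mode a of the two-mode space
B = np.kron(I, a)                            # mode b

# 50:50 beam splitter: U = exp[i (pi/4) (a^dag b + a b^dag)]
U = expm(1j * (np.pi / 4) * (A.conj().T @ B + A @ B.conj().T))

def fock(na, nb):
    v = np.zeros(D * D)
    v[na * D + nb] = 1.0
    return v

out = U @ fock(1, 1)                         # one photon in each input port
for na in range(D):
    for nb in range(D):
        p = abs(out[na * D + nb]) ** 2
        if p > 1e-12:
            print(f"|{na},{nb}>: {p:.3f}")   # only |2,0> and |0,2>, 1/2 each
```

Because the generator conserves total photon number, the two-photon sector is represented exactly even at this small truncation.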
Unlike the fundamental forces of the Standard Model, such as electromagnetic, weak and strong forces, the quantum effects of gravity are still experimentally inaccessible. The weak coupling of gravity with matter makes it significant only for large masses, where quantum effects are too subtle to be measured with current technology. Nevertheless, insight into quantum aspects of gravity is key to understanding unification theories, cosmology or the physics of black holes. Here we propose the simulation of quantum gravity with optical lattices, which allows us to arbitrarily control coupling strengths. More concretely, we consider $(2+1)$-dimensional Dirac fermions, simulated by ultra-cold fermionic atoms arranged in a honeycomb lattice, coupled to massive quantum gravity, simulated by bosonic atoms positioned at the links of the lattice. The quantum effects of gravity induce interactions between the Dirac fermions that can be witnessed, for example, through the violation of Wick's theorem. The similarity of our approach to current experimental simulations of gauge theories suggests that quantum gravity models can be simulated in the laboratory in the near future.
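The proposed witness can be illustrated numerically. The sketch below (a toy four-mode hopping chain of my own, not the paper's honeycomb model) verifies that Wick's theorem holds exactly in the ground state of a free-fermion Hamiltonian; any boson-induced interaction would show up as a nonzero residual in the same check.

```python
# Toy check of Wick's theorem for free fermions via Jordan-Wigner;
# interactions (e.g. boson-mediated) would make the residual nonzero.
import numpy as np
from functools import reduce

L = 4                                           # fermionic modes
ann = np.array([[0, 1], [0, 0]], dtype=float)   # single-site annihilation
Z = np.diag([1.0, -1.0])
I2 = np.eye(2)

def c(j):
    """Jordan-Wigner fermion annihilation operator on site j."""
    ops = [Z] * j + [ann] + [I2] * (L - j - 1)
    return reduce(np.kron, ops)

C = [c(j) for j in range(L)]

# Quadratic hopping Hamiltonian: H = -sum_j (c_j^dag c_{j+1} + h.c.)
H = sum(-(C[j].T @ C[j + 1] + C[j + 1].T @ C[j]) for j in range(L - 1))
vals, vecs = np.linalg.eigh(H)
g = vecs[:, 0]                                  # unique many-body ground state

two = lambda i, j: g @ C[i].T @ C[j] @ g        # <c_i^dag c_j>
four = lambda i, j, k, l: g @ C[i].T @ C[j].T @ C[k] @ C[l] @ g

# Wick: <c_i^dag c_j^dag c_k c_l> = <c_i^dag c_l><c_j^dag c_k>
#                                 - <c_i^dag c_k><c_j^dag c_l>
err = max(
    abs(four(i, j, k, l) - (two(i, l) * two(j, k) - two(i, k) * two(j, l)))
    for i in range(L) for j in range(L) for k in range(L) for l in range(L)
)
print(f"max Wick violation: {err:.2e}")         # ~1e-15 for a free theory
```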
We study the computational power of unitary Clifford circuits with solely magic state inputs (CM circuits), supplemented by classical efficient computation. We show that CM circuits are hard to classically simulate up to multiplicative error (assuming PH non-collapse), and also up to additive error under plausible average-case hardness conjectures. Unlike other such known classes, a broad variety of possible conjectures apply. Along the way we give an extension of the Gottesman-Knill theorem that applies to universal computation, showing that for Clifford circuits with joint stabiliser and non-stabiliser inputs, the stabiliser part can be eliminated in favour of classical simulation, leaving a Clifford circuit on only the non-stabiliser part. Finally we discuss implementational advantages of CM circuits.
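A small state-vector example (the standard textbook T-gate teleportation gadget; the random input state is my choice) shows the mechanism behind CM circuits: Clifford operations consuming one magic-state input |A> = (|0> + e^{i pi/4}|1>)/sqrt(2) implement the non-Clifford T gate, with only a Clifford S correction conditioned on the measurement outcome.

```python
# Sketch of the magic-state T gadget: CNOT + Z-measurement + conditional S
# (all Clifford) applied to |psi> x |A> yields T|psi> on the data qubit.
import numpy as np

T = np.diag([1.0, np.exp(1j * np.pi / 4)])
S = np.diag([1.0, 1j])
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

rng = np.random.default_rng(7)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)                        # arbitrary input |psi>

magic = np.array([1.0, np.exp(1j * np.pi / 4)]) / np.sqrt(2)
state = CNOT @ np.kron(psi, magic)                # data controls, |A> is target

for outcome in (0, 1):
    post = state.reshape(2, 2)[:, outcome].copy() # project ancilla onto outcome
    post /= np.linalg.norm(post)
    if outcome == 1:
        post = S @ post                           # Clifford correction
    overlap = abs(np.vdot(T @ psi, post))         # compare up to global phase
    print(f"ancilla = {outcome}: |<T psi|post>| = {overlap:.6f}")   # 1.000000
```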
Many existing schemes for linear-optical quantum computing (LOQC) depend on multiplexing (MUX), which uses dynamic routing to enable near-deterministic gates and sources to be constructed using heralded, probabilistic primitives. MUXing accounts for the overwhelming majority of active switching demands in current LOQC architectures. In this manuscript, we introduce relative multiplexing (RMUX), a general-purpose optimization which can dramatically reduce the active switching requirements for MUX in LOQC, and thereby reduce hardware complexity and energy consumption, as well as relaxing demands on performance for various photonic components. We discuss the application of RMUX to the generation of entangled states from probabilistic single-photon sources, and argue that an order of magnitude improvement in the rate of generation of Bell states can be achieved. In addition, we apply RMUX to the proposal for percolation of a 3D cluster state in [PRL 115, 020502 (2015)], and we find that RMUX allows a 2.4x increase in loss tolerance for this architecture.
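For intuition about why MUX helps at all, here is a back-of-envelope sketch (the success probability p and the source counts are illustrative, not taken from the manuscript): N heralded sources behind a switch network succeed with probability 1 - (1 - p)^N, so near-deterministic operation is bought with many probabilistic primitives, and it is exactly the resulting active switching that RMUX seeks to reduce.

```python
# Back-of-envelope sketch: multiplexing N probabilistic heralded sources
# boosts the effective success probability to 1 - (1 - p)^N.
import numpy as np

p = 0.1                                   # heralding probability per source
rng = np.random.default_rng(0)

for N in (1, 4, 16, 64):
    analytic = 1 - (1 - p) ** N
    trials = rng.random((100_000, N)) < p          # which sources fired
    empirical = trials.any(axis=1).mean()          # at least one success
    print(f"N = {N:3d}: analytic {analytic:.3f}, Monte Carlo {empirical:.3f}")
```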