
Imperfect Detectors in Linear Optical Quantum Computers

Added by: Scott Glancy
Publication date: 2002
Field: Physics
Language: English





We discuss the effects of imperfect photon detectors suffering from loss and noise on the reliability of linear optical quantum computers. We show that for a given detector efficiency there is a maximum achievable success probability, and that increasing the number of ancillary photons and detectors used for one controlled-sign-flip gate beyond a critical point will decrease the probability that the computer functions correctly. We have also performed simulations of some small logic gates and estimated the efficiency and noise levels required for a linear optical quantum computer to function properly.
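To make the tradeoff concrete, here is a minimal numerical sketch (a toy model, not the paper's actual calculation): suppose a teleportation-based gate with n ancilla photons succeeds with probability n/(n+1) when the detectors are ideal (the KLM scaling), but each of the n required photon detections registers only with efficiency eta. The probability of a correctly heralded gate is then roughly eta^n * n/(n+1), which is maximized at a finite n, illustrating why adding ancillas eventually hurts.

import numpy as np

def reliable_success(n, eta):
    # Toy model: KLM-style teleportation succeeds with probability
    # n/(n+1) for n ancilla photons, but all n detections must
    # register, each with efficiency eta.
    return eta**n * n / (n + 1)

for eta in (0.90, 0.99, 0.999):
    n = np.arange(1, 200)
    p = reliable_success(n, eta)
    print(f"eta = {eta}: best n = {n[np.argmax(p)]}, success = {p.max():.3f}")

In this toy model, eta = 0.99 gives an optimum near n = 10 with success below 0.83: better detectors raise both the optimal ancilla number and the achievable success probability.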




Related Research

Quantum enhancements of precision in metrology can be compromised by system imperfections. These may be mitigated by appropriate optimization of the input state to render it robust, at the expense of making the state difficult to prepare. In this paper, we identify the major sources of imperfection in an optical sensor: input state preparation inefficiency, sensor losses, and detector inefficiency. The second of these has received much attention; we show that it is the least damaging to surpassing the standard quantum limit in an optical interferometric sensor. Further, we show that photonic states that can be prepared in the laboratory using feasible resources allow a measurement strategy using photon-number-resolving detectors that not only attains the Heisenberg limit for phase estimation in the absence of losses, but also delivers close to the maximum possible precision in realistic scenarios including losses and inefficiencies. In particular, we give bounds for the trade-off between the three sources of imperfection that will allow true quantum-enhanced optical metrology.
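As a rough illustration of how the three inefficiencies combine (a sketch under simplifying assumptions, not the paper's trade-off bounds): if preparation, sensor, and detector efficiencies are lumped into a single overall transmission eta, a widely used asymptotic bound for lossy interferometry is Delta_phi >= sqrt((1 - eta) / (eta * N)), which recovers Heisenberg-like behavior only as eta -> 1.

import numpy as np

# Hypothetical efficiencies; the paper derives more detailed bounds.
eta_prep, eta_sensor, eta_det = 0.95, 0.90, 0.85
eta = eta_prep * eta_sensor * eta_det      # overall transmission

N = np.array([10, 100, 1000, 10000])       # mean photon number
sql = 1 / np.sqrt(N)                       # standard quantum limit
hl = 1 / N                                 # lossless Heisenberg limit
lossy = np.sqrt((1 - eta) / (eta * N))     # asymptotic lossy bound

for n, s, h, l in zip(N, sql, hl, lossy):
    print(f"N = {n:5d}: SQL = {s:.4f}, HL = {h:.5f}, lossy bound = {l:.4f}")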
We use the numerical optimization techniques of Uskov et al. [PRA 81, 012303 (2010)] to investigate the behavior of the success rates for KLM-style [Nature 409, 46 (2001)] two- and three-qubit entangling gates. The methods are first demonstrated at perfect fidelity, and then extended to imperfect gates. We find that as the perfect-fidelity condition is relaxed, the maximum attainable success rates increase in a predictable fashion depending on the size of the system, and we compare that rate of increase for several gates.
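The flavor of such optimizations is easy to reproduce (a toy sketch, not Uskov et al.'s code): parametrize a three-mode unitary, compute the postselected amplitudes of the KLM nonlinear-sign gate, and maximize the success amplitude under a fidelity penalty. At perfect fidelity the optimum should converge to the known success probability of 1/4.

import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

def unitary(params):
    # 3x3 unitary from 9 real parameters via a Hermitian generator.
    h = np.zeros((3, 3), dtype=complex)
    h[np.diag_indices(3)] = params[:3]
    h[np.triu_indices(3, 1)] = params[3:6] + 1j * params[6:9]
    h += np.triu(h, 1).conj().T
    return expm(1j * h)

def ns_amplitudes(u):
    # Postselected amplitudes for |0>, |1>, |2> through the KLM
    # nonlinear-sign gate: signal in mode 0, one ancilla photon in
    # mode 1, vacuum in mode 2, heralded on finding (1, 0) photons
    # in the ancilla modes.
    l0 = u[1, 1]
    l1 = u[0, 0] * u[1, 1] + u[0, 1] * u[1, 0]
    l2 = u[0, 0]**2 * u[1, 1] + 2 * u[0, 0] * u[0, 1] * u[1, 0]
    return l0, l1, l2

def cost(params, weight=100.0):
    # Maximize success |l0|^2 while penalizing deviation from the
    # target transformation l0 = l1 = -l2.
    l0, l1, l2 = ns_amplitudes(unitary(params))
    return -abs(l0)**2 + weight * (abs(l1 - l0)**2 + abs(l2 + l0)**2)

rng = np.random.default_rng(0)
best = min((minimize(cost, rng.normal(size=9)) for _ in range(20)),
           key=lambda r: r.fun)
l0 = ns_amplitudes(unitary(best.x))[0]
print(f"optimized success probability: {abs(l0)**2:.4f}")  # ~ 0.25

Lowering the penalty weight plays the role of relaxing the perfect-fidelity condition, and the attainable success probability grows accordingly.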
Weak value amplification (WVA) is a metrological protocol that amplifies ultra-small physical effects. However, the amplified outcomes necessarily occur with highly suppressed probabilities, leading to extensive debate on whether the overall measurement precision is improved compared with conventional measurement (CM). Here, we experimentally demonstrate unambiguous advantages of WVA: it overcomes practical limitations, including noise and saturation in photo-detection, and maintains shot-noise-scaling precision over a large range of input light intensity, well beyond the dynamic range of the photodetector. The precision achieved by WVA is six times higher than that of CM in our setup. Our results clear the way for the widespread use of WVA in applications involving the measurement of small signals, including precision metrology and commercial sensors.
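The amplification mechanism itself can be seen in a few lines (a generic textbook sketch, not this experiment's optical setup): preselect a qubit in |i>, postselect on |f> nearly orthogonal to it, and the measured pointer shift is proportional to the weak value A_w = <f|A|i>/<f|i>, which can far exceed the eigenvalue range of A at the cost of a small postselection probability |<f|i>|^2.

import numpy as np

# Illustrative spin-1/2 weak value; eps controls near-orthogonality.
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

eps = 0.01
alpha, beta = np.pi / 4, np.pi / 4 - eps
i_state = np.array([np.cos(alpha),  np.sin(alpha)], dtype=complex)
f_state = np.array([np.cos(beta), -np.sin(beta)], dtype=complex)

overlap = f_state.conj() @ i_state                    # <f|i> = sin(eps)
weak_value = (f_state.conj() @ sigma_z @ i_state) / overlap

print(f"weak value A_w = {weak_value.real:.1f}")      # ~ 1/eps = 100
print(f"postselection probability = {abs(overlap)**2:.1e}")  # ~ eps^2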
John Preskill (1997)
The new field of quantum error correction has developed spectacularly since its origin less than two years ago. Encoded quantum information can be protected from errors that arise due to uncontrolled interactions with the environment. Recovery from errors can work effectively even if occasional mistakes occur during the recovery procedure. Furthermore, encoded quantum information can be processed without serious propagation of errors. Hence, an arbitrarily long quantum computation can be performed reliably, provided that the average probability of error per quantum gate is less than a certain critical value, the accuracy threshold. A quantum computer storing about 10^6 qubits, with a probability of error per quantum gate of order 10^-6, would be a formidable factoring engine. Even a smaller, less accurate quantum computer would be able to perform many useful tasks. (This paper is based on a talk presented at the ITP Conference on Quantum Coherence and Decoherence, 15-18 December 1996.)
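The threshold claim has a simple quantitative reading (a standard back-of-the-envelope estimate, not the paper's detailed analysis): under code concatenation, a physical error rate p below the threshold p_th is suppressed doubly exponentially with the concatenation level k, roughly p_k ~ p_th * (p/p_th)^(2^k).

# Concatenated-code scaling with assumed numbers (p from the text,
# p_th chosen for illustration): p_k ~ p_th * (p / p_th)**(2**k).
p_th = 1e-4   # assumed accuracy threshold
p = 1e-6      # physical error rate per gate

for k in range(4):
    p_k = p_th * (p / p_th) ** (2 ** k)
    print(f"level {k}: logical error rate ~ {p_k:.0e}")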
Lov K. Grover (2000)
This article introduces quantum computation by analogy with probabilistic computation. A basic description of the quantum search algorithm is given by representing the algorithm as a C program in a novel way.
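In the same spirit, here is a minimal state-vector simulation of the search algorithm (a generic sketch in Python rather than the article's C program):

import numpy as np

n_qubits = 4
N = 2 ** n_qubits
marked = 11                                  # index of the marked item

state = np.full(N, 1 / np.sqrt(N))           # uniform superposition
iterations = round(np.pi / 4 * np.sqrt(N))   # ~ (pi/4) sqrt(N) steps

for _ in range(iterations):
    state[marked] *= -1                      # oracle: flip marked amplitude
    state = 2 * state.mean() - state         # inversion about the mean

print(f"{iterations} iterations, P(marked) = {state[marked]**2:.3f}")  # ~0.96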
