Memory dephasing and its impact on the rate of entanglement generation in quantum repeaters are addressed. For systems that rely on probabilistic schemes for entanglement distribution and connection, we estimate the maximum achievable rate per employed memory for our optimized partial nesting protocol. We show that, for any given distance $L$, the polynomial scaling of rate with distance can only be achieved if quantum memories with coherence times on the order of $L/c$ or longer, with $c$ being the speed of light in the channel, are available. The above rate degrades as a power of $\exp[-\sqrt{(L/c)/\tau_c}]$ with distance when the coherence time $\tau_c \ll L/c$.
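To make the scaling above concrete, here is a minimal numerical sketch; the channel length, fiber speed, and coherence time are illustrative assumptions, not values taken from the paper.

```python
import math

# Illustrative parameters (assumptions for this sketch, not from the paper).
c = 2.0e8        # speed of light in optical fiber, m/s (approx. 2/3 of vacuum c)
L = 1.0e6        # total distance, m (1000 km)
tau_c = 1.0e-3   # memory coherence time, s (1 ms), so tau_c << L/c

transit = L / c  # one-way transit time L/c = 5 ms

# Degradation factor quoted in the abstract for tau_c << L/c:
# the rate falls off as a power of exp[-sqrt((L/c)/tau_c)].
factor = math.exp(-math.sqrt(transit / tau_c))
print(f"L/c = {transit*1e3:.1f} ms, base degradation factor = {factor:.3f}")
```

With these numbers the factor is roughly 0.11 per power, illustrating how quickly the rate collapses once the coherence time falls below the transit time.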
The construction of large-scale quantum networks relies on the development of practical quantum repeaters. Many approaches have been proposed with the goal of outperforming the direct transmission of photons, but most of them are inefficient or difficult to implement with current technology. Here, we present a protocol that uses a semi-hierarchical structure to improve the entanglement distribution rate while reducing the required memory time to tens of milliseconds. This protocol can be implemented with a fixed elementary-link distance and fixed requirements on quantum memories, both independent of the total distance. This configuration is especially suitable for scalable applications in large-scale quantum networks.
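As a back-of-the-envelope illustration of why fixed elementary links pair naturally with millisecond-scale memories, consider the classical heralding time of a single link; the link length and success probability below are our own assumptions, not the protocol's parameters.

```python
# Illustrative assumptions (not parameters from the paper).
c = 2.0e8         # speed of light in fiber, m/s (approx.)
L0 = 100e3        # assumed elementary-link length, m (100 km)
p_success = 0.05  # assumed heralding success probability per attempt

t_herald = L0 / c                # wait per attempt for the heralding signal: 0.5 ms
t_memory = t_herald / p_success  # mean storage time over repeated attempts: 10 ms
print(f"per-attempt wait: {t_herald*1e3:.2f} ms, "
      f"mean required memory time: {t_memory*1e3:.1f} ms")
```

Repeated attempts and swap levels multiply the per-attempt wait, which is how the requirement lands in the tens-of-milliseconds range quoted above.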
The realization of a functional quantum repeater is one of the major research goals in long-distance quantum communication. Among the different approaches that are being followed, the one relying on quantum memories interfaced with deterministic quantum emitters is considered one of the most promising solutions. In this work, we focus on memory-based quantum-repeater schemes that rely on semiconductor quantum dots for the generation of polarization-entangled photons. Going through the most relevant figures of merit related to the efficiency of the photon source, we select significant developments in fabrication, processing, and tuning techniques aimed at combining a high degree of entanglement with on-demand pair generation, with a special focus on the progress achieved in the representative case of the GaAs system. We proceed to offer a perspective on integration with quantum memories, both highlighting preliminary works on natural-artificial atomic interfaces and commenting on a wide range of currently available and potentially viable memory solutions in terms of wavelength, bandwidth, and noise requirements. To complete the overview, we also present recent implementations of entanglement-based quantum communication protocols with quantum dots and highlight the challenges ahead for the implementation of practical quantum networks.
Efficient all-photonic quantum teleportation requires fast and deterministic sources of highly indistinguishable and entangled photons. Solid-state quantum emitters--notably semiconductor quantum dots--are promising candidates for the role. However, despite the remarkable progress in nanofabrication, proof-of-concept demonstrations of quantum teleportation have highlighted that imperfections of the emitter still place a major roadblock in the way of applications. Here, rather than focusing on source-optimization strategies, we deal with imperfections and study different teleportation protocols with the goal of identifying the one with maximal teleportation fidelity. Using a quantum dot with sub-par values of entanglement and photon indistinguishability, we show that the average teleportation fidelity can be raised from below the classical limit to 0.842(14). Our results, which are backed by a theoretical model that quantitatively explains the experimental findings, loosen the very stringent requirements set on the ideal entangled-photon source and highlight that imperfect quantum dots can still have a say in teleportation-based quantum communication architectures.
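For context, the classical limit referred to above is the standard benchmark for teleporting an unknown qubit; the comparison below is our own arithmetic, not taken from the paper.

```latex
% Best classical (measure-and-resend) average fidelity for an unknown qubit:
\[
  \bar{F}_{\mathrm{cl}} = \tfrac{2}{3} \approx 0.667 .
\]
% The reported value then exceeds the classical benchmark by about
% (0.842 - 0.667)/0.014 \approx 12.5 standard deviations.
```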
Quantum enhancements of precision in metrology can be compromised by system imperfections. These may be mitigated by appropriate optimization of the input state to render it robust, at the expense of making the state difficult to prepare. In this paper, we identify the major sources of imperfection in an optical sensor: input-state preparation inefficiency, sensor losses, and detector inefficiency. The second of these has received much attention; we show that it is the least damaging to surpassing the standard quantum limit in an optical interferometric sensor. Further, we show that photonic states that can be prepared in the laboratory using feasible resources allow a measurement strategy using photon-number-resolving detectors that not only attains the Heisenberg limit for phase estimation in the absence of losses, but also delivers close to the maximum possible precision in realistic scenarios including losses and inefficiencies. In particular, we give bounds for the trade-off between the three sources of imperfection that will allow true quantum-enhanced optical metrology.
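For reference, the two benchmarks named above can be stated explicitly; the lossy bound in the second display is a known asymptotic result from the lossy-interferometry literature, quoted here as context rather than taken from this abstract.

```latex
% Standard quantum limit vs. Heisenberg limit for phase estimation with N photons:
\[
  \Delta\phi_{\mathrm{SQL}} = \frac{1}{\sqrt{N}}, \qquad
  \Delta\phi_{\mathrm{HL}} = \frac{1}{N}.
\]
% With per-photon transmission \eta, losses asymptotically enforce
\[
  \Delta\phi \gtrsim \sqrt{\frac{1-\eta}{\eta N}},
\]
% i.e. shot-noise-like scaling with a constant-factor quantum enhancement.
```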
The impact of measurement imperfections on quantum metrology protocols has been largely ignored, even though they are inherent to any sensing platform in which the detection process exhibits noise that can neither be eradicated nor translated onto the sensing stage and interpreted as decoherence. In this work, we approach this issue in a systematic manner. Focussing firstly on pure states, we demonstrate how the form of the quantum Fisher information must be modified to account for noisy detection, and propose tractable methods allowing for its approximate evaluation. We then show that in canonical scenarios involving $N$ probes with local measurements undergoing readout noise, the optimal sensitivity dramatically changes its behaviour depending on whether global or local control operations are allowed to counterbalance the measurement imperfections. In the former case, we prove that the ideal sensitivity (e.g. the Heisenberg scaling) can always be recovered in the asymptotic $N$ limit, while in the latter the readout noise fundamentally constrains the quantum enhancement of sensitivity to a constant factor. We illustrate our findings with an example of an NV centre measured via the repetitive readout procedure, as well as schemes involving spin-1/2 probes with bit-flip errors affecting their two-outcome measurements, for which we find the input states and control unitary operations sufficient to attain the ultimate asymptotic precision.
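The spin-1/2 bit-flip scenario mentioned at the end admits a compact worked example. The sketch below (our own toy model, not the paper's code) computes the classical Fisher information per probe when a Ramsey-type two-outcome measurement suffers a bit-flip readout error with probability eps.

```python
import numpy as np

def fisher_info(phi, eps):
    """Classical Fisher information per spin-1/2 probe for a phase phi encoded
    in (|0> + e^{i phi}|1>)/sqrt(2) and read out in the X basis, with each of
    the two outcomes flipped with probability eps (bit-flip readout noise)."""
    p = eps + (1 - 2 * eps) * np.cos(phi / 2) ** 2  # noisy '+' outcome probability
    dp = -(1 - 2 * eps) * np.sin(phi) / 2           # derivative dp/dphi
    return dp ** 2 / (p * (1 - p))

# At the optimal working point phi = pi/2 this model gives F = (1 - 2*eps)^2:
# readout noise caps the per-probe information a constant factor below 1.
phi_opt = np.pi / 2
for eps in (0.0, 0.05, 0.10):
    print(f"eps = {eps:.2f}: F = {fisher_info(phi_opt, eps):.3f}")
```

Without readout noise F = 1 per probe (shot-noise scaling over N probes); any eps > 0 reduces F by the constant factor (1 - 2 eps)^2, consistent in spirit with the constant-factor constraint described above for local strategies.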