We propose a method for reconfiguring a relay node for polarization-encoded quantum key distribution (QKD) networks. The relay can be switched between trusted and untrusted modes to adapt to different network conditions, relay distances, and security requirements. This not only extends the distance over which a QKD network operates but also enables point-to-multipoint (P2MP) network topologies. The proposed architecture centralizes the expensive and delicate single-photon detectors (SPDs) at the relay node, easing maintenance and cooling, while simplifying each user node so that it needs only commercially available devices for low-cost qubit preparation.
We propose an integrated photonics device for mapping qubits encoded in the polarization of a photon onto the spin state of a solid-state defect coupled to a photonic crystal cavity: a Polarization-Encoded Photon-to-Spin Interface (PEPSI). We perform a theoretical analysis of the state fidelity's dependence on the device's polarization extinction ratio and atom-cavity cooperativity. Furthermore, we explore the rate-fidelity trade-off through analytical and numerical models. In simulation, we show that our design enables efficient, high-fidelity photon-to-spin mapping.
Due to physical orientations and birefringence effects, practical quantum information protocols utilizing optical polarization need to handle misalignment between preparation and measurement reference frames. For any system with this capability, an important question is how many resources -- e.g., measured single photons -- are needed to reliably achieve alignment precision sufficient for the desired quantum protocol. Here we study the performance of a polarization-frame alignment scheme used in prior laboratory and field quantum key distribution (QKD) experiments by performing Monte Carlo numerical simulations. The scheme utilizes, to the extent possible, the same single-photon-level signals and measurements as for the QKD protocol being supported. Even with detector noise and imperfect sources, our analysis shows that only a small fraction of resources from the overall signal -- a few hundred photon detections, in total -- are required for good performance, restoring the state to better than 99% of its original quality.
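The flavor of this resource-counting argument can be reproduced with a toy Monte Carlo model: estimate a single frame-rotation angle from a few hundred single-photon detections and check the fidelity of the restored state. The model below (a sketch; the function name, the single-angle rotation, and the binomial detection statistics are simplifying assumptions, not the scheme from the paper) prepares H-polarized photons and counts how many pass a misaligned analyzer, where each photon passes with probability cos^2(theta).

```python
import math
import random

def estimate_angle(theta_true, n_photons, seed=1):
    """Toy estimate of a polarization-frame rotation angle.

    Each H-prepared single photon passes the misaligned H-analyzer with
    probability cos^2(theta_true); from the observed pass fraction we
    invert to get an angle estimate.  Illustrative model only.
    """
    rng = random.Random(seed)
    p = math.cos(theta_true) ** 2
    passed = sum(rng.random() < p for _ in range(n_photons))
    return math.acos(math.sqrt(passed / n_photons))

theta = 0.3                       # true misalignment (radians), assumed
est = estimate_angle(theta, 300)  # a few hundred detections
fidelity = math.cos(theta - est) ** 2
```

With a few hundred detections the statistical spread of the estimate is on the order of tens of milliradians, so the restored-state fidelity cos^2(theta - est) sits comfortably above 99% in typical runs, consistent with the scale of the abstract's claim.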
We consider a system consisting of a server that receives updates for $N$ files according to independent Poisson processes. The goal of the server is to deliver the latest version of the files to the user through a parallel network of $K$ caches. We consider an update received by the user successful if the user receives the same file version currently prevailing at the server. We derive an analytical expression for information freshness at the user. We observe that freshness for a file increases as its update rates are consolidated across caches. To solve the multi-cache problem, we first solve the auxiliary problem of a single-cache system. We then rework this auxiliary solution for our parallel-cache network by consolidating rates onto single routes as much as possible. This yields an approximate (sub-optimal) solution to the original problem. We provide an upper bound on the gap between the sub-optimal solution and the optimal solution. Numerical results show that the sub-optimal policy closely approximates the optimal policy.
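The single-cache auxiliary model can be sketched as a small event-driven simulation: the server's file version advances at Poisson rate lam, the cache pulls the latest version at rate c, and the user requests through the cache at rate u; a request is "fresh" when the cache's version matches the server's. All names and rates here are illustrative choices, not the paper's notation.

```python
import random

def freshness_single_cache(lam, c, u, horizon=20000.0, seed=0):
    """Monte Carlo estimate of freshness in a server -> cache -> user chain.

    lam: Poisson rate of new file versions at the server
    c:   rate at which the cache refreshes from the server
    u:   rate at which the user requests the file from the cache
    Returns the fraction of user requests that receive the version
    currently prevailing at the server.
    """
    rng = random.Random(seed)
    t = 0.0
    server_v = cache_v = 0
    hits = total = 0
    while t < horizon:
        # Next event is the minimum of three exponential clocks
        # (valid by memorylessness of the Poisson processes).
        dt_s = rng.expovariate(lam)
        dt_c = rng.expovariate(c)
        dt_u = rng.expovariate(u)
        dt = min(dt_s, dt_c, dt_u)
        t += dt
        if dt == dt_s:
            server_v += 1        # server receives a new version
        elif dt == dt_c:
            cache_v = server_v   # cache pulls the latest version
        else:
            total += 1
            hits += (cache_v == server_v)
    return hits / total
```

Raising the cache refresh rate relative to the server's update rate raises freshness, matching the abstract's observation that consolidating rate onto a route helps.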
We present a silicon optical transmitter for polarization-encoded quantum key distribution (QKD). The chip was fabricated in a standard silicon photonic foundry process and integrated a pulse generator, intensity modulator, variable optical attenuator, and polarization modulator in a 1.3 mm $\times$ 3 mm die area. The devices in the photonic circuit meet the requirements for QKD. The transmitter was used in a proof-of-concept demonstration of the BB84 QKD protocol over a 5 km long fiber link.
New optical technologies offer the ability to reconfigure network topologies dynamically, rather than setting them once and for all. This is true in both optical wide area networks (optical WANs) and in datacenters, despite the many differences between these two settings. Because of these new technologies, there has been a surge of both practical and theoretical research on algorithms to take advantage of them. In particular, Jia et al. [INFOCOM 17] designed online scheduling algorithms for dynamically reconfigurable topologies for both the makespan and sum of completion times objectives. In this paper, we work in the same setting but study an objective that is more meaningful in an online setting: the sum of flow times. The flow time of a job is the total amount of time that it spends in the system, which may be considerably smaller than its completion time if it is released late. We provide competitive algorithms for the online setting with speed augmentation, and also give a lower bound proving that speed augmentation is in fact necessary. As a side effect of our techniques, we also improve and generalize the results of Jia et al. on completion times by giving an $O(1)$-competitive algorithm for arbitrary sizes and release times even when nodes have different degree bounds, and moreover allow for the weighted sum of completion times (or flow times).
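The distinction between the two objectives can be made concrete on a single machine: for a job with release time r_j and completion time C_j, the flow time is F_j = C_j - r_j. The sketch below (FIFO service of jobs given as (release, size) pairs; a hypothetical helper, not the scheduling algorithm of the paper) shows how a late-released job inflates the sum of completion times while contributing little flow time.

```python
def flow_and_completion(jobs):
    """Serve jobs in release order on one machine (FIFO).

    jobs: list of (release_time, processing_time) pairs.
    Returns (sum of completion times, sum of flow times), where the
    flow time of a job is F_j = C_j - r_j.
    """
    t = 0.0
    sum_c = sum_f = 0.0
    for r, p in sorted(jobs):
        t = max(t, r) + p   # cannot start before release
        sum_c += t          # completion time C_j
        sum_f += t - r      # flow time F_j = C_j - r_j
    return sum_c, sum_f

# Two unit jobs, the second released at time 100: its completion time is
# 101, but it spends only 1 unit of time in the system.
sum_c, sum_f = flow_and_completion([(0.0, 1.0), (100.0, 1.0)])
```

Here the completion-time objective is dominated by the release date (sum 102), while the flow-time objective (sum 2) measures only the time jobs actually wait, which is why flow time is the more meaningful online objective.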