
Performance of real-time adaptive optics compensation in a turbulent channel with high-dimensional spatial-mode encoding

Posted by Jiapeng Zhao
Publication date: 2020
Research language: English





The orbital angular momentum (OAM) of photons is a promising degree of freedom for high-dimensional quantum key distribution (QKD). However, effectively mitigating the adverse effects of atmospheric turbulence remains a persistent challenge for OAM QKD systems operating over free-space communication channels. In contrast to previous works focusing on correcting static simulated turbulence, we investigate the performance of OAM QKD in real atmospheric turbulence with real-time adaptive optics (AO) correction. We show that, even though our AO system provides only limited correction, it is possible to mitigate the errors induced by weak turbulence and establish a secure channel. The crosstalk induced by turbulence and the performance of the AO system are investigated in two configurations: a lab-scale link with controllable turbulence, and a 340 m long cross-campus link with dynamic atmospheric turbulence. Our experimental results suggest that an advanced AO system with fine beam tracking, reliable beam stabilization, precise wavefront sensing, and accurate wavefront correction is necessary to adequately correct turbulence-induced errors. We also propose and demonstrate different solutions to improve the performance of OAM QKD under turbulence, which could make OAM encoding feasible even in strong turbulence.
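To see concretely how turbulence-induced crosstalk eats into the secure key, the snippet below is a minimal sketch (not taken from the paper) of the widely used asymptotic key-rate estimate for a d-dimensional BB84-style protocol; the dimension and error-rate values are hypothetical placeholders, not measured results.

```python
import numpy as np

def dit_entropy(q, d):
    """Shannon entropy (in bits) of a d-ary symbol error of rate q,
    with errors spread uniformly over the d - 1 wrong outcomes."""
    if q <= 0.0:
        return 0.0
    return -q * np.log2(q / (d - 1)) - (1 - q) * np.log2(1 - q)

def key_rate(q, d):
    """Asymptotic secret-key rate per sifted symbol for a d-dimensional
    BB84-style protocol, assuming equal error rates in both bases."""
    return max(0.0, np.log2(d) - 2.0 * dit_entropy(q, d))

# Illustrative numbers only (not from the paper): a 7-dimensional
# OAM alphabet with a 10% turbulence-induced symbol error rate.
print(f"key rate: {key_rate(0.10, 7):.3f} bits per sifted symbol")
```

The rate drops to zero once crosstalk pushes the symbol error rate past the dimension-dependent threshold, which is why residual turbulence after AO correction directly limits the usable alphabet size.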




Read also

Sampling is classically performed by recording the amplitude of an input signal at given time instants; however, sampling and reconstructing a signal using multiple devices in parallel becomes a more difficult problem when the devices have an unknown shift in their clocks. Alternatively, one can record the times at which a signal (or its integral) crosses given thresholds. This can model integrate-and-fire neurons, for example, and has been studied by Lazar and Toth under the name of "Time Encoding Machines". This sampling method is closer to what is found in nature. In this paper, we show that, when using time encoding machines, reconstruction from multiple channels has a more intuitive solution and does not require knowledge of the shifts between machines. We show that, if single-channel time encoding can sample and perfectly reconstruct a $2\Omega$-bandlimited signal, then $M$-channel time encoding with shifted integrators can sample and perfectly reconstruct a signal with $M$ times the bandwidth. Furthermore, we present an algorithm to perform this reconstruction and prove that it converges to the correct unique solution, in the noiseless case, without knowledge of the relative shifts between the integrators of the machines. This is quite unlike classical multi-channel sampling, where unknown shifts between sampling devices pose a problem for perfect reconstruction.
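As a rough illustration of the sampling model discussed here, the following is a minimal sketch of a single-channel integrate-and-fire time encoder operating on a regularly sampled input; the bias and threshold values are illustrative, and the paper's multi-channel reconstruction algorithm is not reproduced.

```python
import numpy as np

def time_encode(signal, t, bias, threshold):
    """Single-channel integrate-and-fire time encoding: integrate
    (signal + bias) over time and record a trigger time whenever the
    running integral crosses the threshold, then subtract it off."""
    dt = t[1] - t[0]
    integral, triggers = 0.0, []
    for k, x in enumerate(signal):
        integral += (x + bias) * dt
        if integral >= threshold:
            triggers.append(t[k])
            integral -= threshold
    return np.array(triggers)

# Toy bandlimited input (illustrative parameter values).
t = np.linspace(0.0, 1.0, 10_000)
x = 0.4 * np.sin(2 * np.pi * 3 * t) + 0.3 * np.cos(2 * np.pi * 5 * t)
spikes = time_encode(x, t, bias=1.0, threshold=0.02)
print(f"{len(spikes)} trigger times; first few: {np.round(spikes[:4], 4)}")
```

The bias keeps the integrand positive so the trigger times occur densely enough to capture the signal; the information is carried entirely by the spacing of the triggers rather than by amplitude samples.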
Dead time effects have been considered a major limitation for fast data acquisition in various time-correlated single photon counting applications, since a commonly adopted approach for dead time mitigation is to operate in the low-flux regime where dead time effects can be ignored. Through the application of lidar ranging, this work explores the empirical distribution of detection times in the presence of dead time and demonstrates that an accurate statistical model can result in reduced ranging error with shorter data acquisition time when operating in the high-flux regime. Specifically, we show that the empirical distribution of detection times converges to the stationary distribution of a Markov chain. Depth estimation can then be performed by passing the empirical distribution through a filter matched to the stationary distribution. Moreover, based on the Markov chain model, we formulate the recovery of arrival distribution from detection distribution as a nonlinear inverse problem and solve it via provably convergent mathematical optimization. By comparing per-detection Fisher information for depth estimation from high- and low-flux detection time distributions, we provide an analytical basis for possible improvement of ranging performance resulting from the presence of dead time. Finally, we demonstrate the effectiveness of our formulation and algorithm via simulations of lidar ranging.
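The following toy Monte-Carlo sketch shows how a non-paralyzable dead time distorts the empirical detection-time distribution in the high-flux regime; the flux, dead-time, and pulse parameters are made up, and the Markov-chain matched filter described in the paper is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)

def detect_with_dead_time(arrivals, dead_time):
    """Register an arrival only if the detector has recovered from
    the previously registered detection (non-paralyzable dead time)."""
    detections, last = [], -np.inf
    for t in np.sort(arrivals):
        if t - last >= dead_time:
            detections.append(t)
            last = t
    return np.array(detections)

# High-flux toy scenario (all numbers illustrative): Poisson background
# plus a signal return at 30 ns within each 100 ns repetition period.
period, dead_time, n_pulses = 100e-9, 25e-9, 20_000
arrivals = []
for k in range(n_pulses):
    n_bg = rng.poisson(2.0)
    arrivals.extend(k * period + rng.uniform(0.0, period, n_bg))
    if rng.random() < 0.8:
        arrivals.append(k * period + 30e-9 + rng.normal(0.0, 0.5e-9))
detections = detect_with_dead_time(np.array(arrivals), dead_time)

# Fold detections into one period: the resulting histogram is the
# empirical detection-time distribution, skewed by dead time.
hist, edges = np.histogram(np.mod(detections, period), bins=100, range=(0.0, period))
print("histogram peak at bin", hist.argmax(), "of 100 (true signal sits near bin 30)")
```

Comparing this distorted histogram with the low-flux (dead-time-free) histogram of arrival times makes the motivation for a model-matched filter apparent: naive peak-finding on the distorted distribution can bias the depth estimate.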
Free-space communication links are severely affected by atmospheric turbulence, which degrades the transmitted signal. One of the most common solutions is to exploit diversity: information is sent in parallel using two or more transmitters that are spatially separated, so that each beam experiences different atmospheric turbulence, lowering the probability of a receive error. In this work we propose and experimentally demonstrate a generalization of diversity based on spatial modes of light, which we have termed modal diversity. We remove the need for a physical separation of the transmitters by exploiting the fact that spatial modes of light experience different perturbations, even when travelling along the same path. For this proof of principle we selected modes from the Hermite-Gaussian and Laguerre-Gaussian basis sets and demonstrate an improvement in bit error rate of up to 54%. We note that modal diversity enables physically compact and longer-distance free-space optical links without increasing the total transmit power.
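As a back-of-the-envelope illustration of why diversity helps, the sketch below models each spatial mode as an independently fading on-off-keyed branch with selection combining at the receiver; this is a deliberately simplified stand-in for the experiment, with made-up fading statistics and SNR, not the paper's channel model.

```python
import numpy as np

rng = np.random.default_rng(1)

def ook_ber_selection(n_bits, n_modes, snr_db=10.0):
    """Toy modal-diversity model: each spatial mode sees an independent
    power fade per symbol, and the receiver decodes from the strongest
    copy (selection combining) with a gain-aware threshold."""
    bits = rng.integers(0, 2, n_bits)
    sigma = 1.0 / np.sqrt(2.0 * 10 ** (snr_db / 10.0))
    fades = rng.exponential(1.0, size=(n_modes, n_bits))   # power fades per mode
    best = np.sqrt(fades.max(axis=0))                      # amplitude of best branch
    rx = bits * best + rng.normal(0.0, sigma, n_bits)
    decisions = (rx > 0.5 * best).astype(int)
    return np.mean(decisions != bits)

# Illustrative only: BER drops as more independently perturbed modes
# carry copies of the same data.
for m in (1, 2, 4):
    print(f"{m} mode(s): BER ≈ {ook_ber_selection(200_000, m):.4f}")
```

The gain comes entirely from the low probability that all branches fade simultaneously, which is the same mechanism the paper exploits by using modes that decorrelate under turbulence despite sharing one physical path.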
In this paper, the bit error rate (BER) performance of spatial modulation (SM) systems is investigated both theoretically and by simulation in a non-stationary Kronecker-based massive multiple-input multiple-output (MIMO) channel model in multi-user (MU) scenarios. Massive MIMO SM systems are considered using both a time-division multiple access (TDMA) scheme and a block diagonalization (BD) based precoding scheme, for different system settings. Their performance is compared with a vertical Bell Labs layered space-time (V-BLAST) architecture based system and a conventional channel inversion system. It is observed that a higher cluster evolution factor can result in better BER performance of SM systems due to the lower correlation among sub-channels. Compared with the BD-SM system, the SM system using the TDMA scheme obtains a better BER performance but with a much lower total system data rate. The BD-MU-SM system achieves the best trade-off between data rate and BER performance among all of the systems considered. When compared with the V-BLAST system and the channel inversion system, SM approaches offer performance advantages for MU massive MIMO systems.
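For readers unfamiliar with spatial modulation, the following minimal sketch shows the basic SM bit-to-antenna/symbol mapping assumed in such systems; the antenna count and QPSK constellation are illustrative choices, not the configuration studied in the paper.

```python
import numpy as np

QPSK = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)

def sm_map(bits, n_tx=4, constellation=QPSK):
    """Spatial-modulation mapping: log2(n_tx) bits select which transmit
    antenna is active, the remaining bits select the symbol sent on that
    antenna; all other antennas stay silent in that channel use."""
    k_ant = int(np.log2(n_tx))
    k_sym = int(np.log2(len(constellation)))
    step = k_ant + k_sym
    bits = bits[: len(bits) // step * step].reshape(-1, step)
    ant = bits[:, :k_ant] @ (1 << np.arange(k_ant)[::-1])
    sym = bits[:, k_ant:] @ (1 << np.arange(k_sym)[::-1])
    frames = np.zeros((len(bits), n_tx), dtype=complex)
    frames[np.arange(len(bits)), ant] = constellation[sym]
    return frames   # one row per channel use, one active antenna per row

rng = np.random.default_rng(2)
print(sm_map(rng.integers(0, 2, 16)))
```

Because only one antenna transmits per channel use, part of the information rides on the antenna index itself, which is why sub-channel correlation (and hence the cluster evolution factor) matters so much for SM detection.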
Meeting ever-growing information rate demands has become of utmost importance for optical communication systems. However, it has proven to be a challenging task due to the presence of Kerr effects, which have largely been regarded as a major bottleneck for enhancing the achievable information rates in modern optical communications. In this work, the optimisation and performance of digital nonlinearity compensation are discussed for maximising the achievable information rates in spectrally efficient optical fibre communication systems. It is found that, for any given target information rate, there exists a trade-off between modulation format and compensated bandwidth that reduces the computational complexity requirement of digital nonlinearity compensation.
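Digital backpropagation is the canonical form of digital nonlinearity compensation; the sketch below is a heavily simplified lossless, single-channel split-step version with illustrative fibre parameters, intended only to show the structure of the compensation, not the optimised scheme studied in the paper.

```python
import numpy as np

def digital_backpropagation(rx, n_steps, length_km=80.0,
                            beta2=-21.7e-27, gamma=1.3e-3, fs=32e9):
    """Simplified (lossless, single-channel) digital backpropagation:
    alternate an inverse chromatic-dispersion step in the frequency
    domain with an inverse Kerr phase rotation in the time domain."""
    w = 2.0 * np.pi * np.fft.fftfreq(len(rx), d=1.0 / fs)
    dz = length_km * 1e3 / n_steps
    inv_dispersion = np.exp(-1j * (beta2 / 2.0) * w**2 * dz)   # negated fibre dispersion
    x = np.asarray(rx, dtype=complex).copy()
    for _ in range(n_steps):
        x = np.fft.ifft(np.fft.fft(x) * inv_dispersion)
        x *= np.exp(-1j * gamma * np.abs(x) ** 2 * dz)          # negated Kerr phase
    return x

# Illustrative usage on random QPSK-like samples at roughly 1 mW power.
rng = np.random.default_rng(3)
symbols = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
rx = rng.choice(symbols, 4096) * np.sqrt(1e-3)
out = digital_backpropagation(rx, n_steps=10)
print(out[:2])
```

The compensated bandwidth and the number of steps per span drive the computational cost, which is exactly the complexity lever traded off against modulation format in the abstract above.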