
Model-Based Machine Learning for Joint Digital Backpropagation and PMD Compensation

Posted by Christian Häger
Published 2020
Research language: English





In this paper, we propose a model-based machine-learning approach for dual-polarization systems by parameterizing the split-step Fourier method for the Manakov-PMD equation. The resulting method combines hardware-friendly time-domain nonlinearity mitigation via the recently proposed learned digital backpropagation (LDBP) with distributed compensation of polarization-mode dispersion (PMD). We refer to the resulting approach as LDBP-PMD. We train LDBP-PMD on multiple PMD realizations and show that it converges within 1% of its peak dB performance after 428 training iterations on average, yielding a peak effective signal-to-noise ratio only 0.30 dB below the PMD-free case. Similar to state-of-the-art lumped PMD compensation algorithms in practical systems, our approach assumes no knowledge about the particular PMD realization along the link, nor about the total accumulated PMD. This is a significant improvement over prior work on distributed PMD compensation, where knowledge about the accumulated PMD is typically assumed. We also compare different parameterization choices in terms of performance, complexity, and convergence behavior. Lastly, we demonstrate that the learned models can be successfully retrained after an abrupt change of the PMD realization along the fiber.
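The core idea of the abstract — parameterizing each split-step of the Manakov-PMD equation with learnable pieces (a linear filter, a distributed PMD rotation, a nonlinear coefficient) — can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the paper's implementation: the function name, the single-angle 2x2 PMD section, and the FIR taps shared between polarizations are choices made for brevity.

```python
import numpy as np

def ldbp_pmd_step(ex, ey, taps, theta, gamma_eff):
    """One step of a learned split-step model for the Manakov-PMD equation.

    ex, ey    : complex baseband samples of the two polarizations
    taps      : FIR taps of the learnable linear (dispersion) filter
    theta     : rotation angle of a simple 2x2 PMD section (learnable)
    gamma_eff : effective nonlinear coefficient for this step (learnable;
                the Manakov 8/9 factor is absorbed into it)
    """
    # Linear step: time-domain FIR filtering of each polarization.
    ex = np.convolve(ex, taps, mode="same")
    ey = np.convolve(ey, taps, mode="same")
    # Distributed PMD section: 2x2 unitary rotation mixing the polarizations.
    c, s = np.cos(theta), np.sin(theta)
    ex, ey = c * ex + s * ey, -s * ex + c * ey
    # Nonlinear step: phase rotation driven by the total instantaneous power.
    phase = np.exp(-1j * gamma_eff * (np.abs(ex) ** 2 + np.abs(ey) ** 2))
    return ex * phase, ey * phase
```

In the paper's setting, `taps`, `theta`, and `gamma_eff` for every step would be trained jointly by gradient descent on the recovered-signal error, which is what allows the model to adapt to an unknown PMD realization.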




Read also

Feng Shu, Lin Liu, Yumeng Zhang (2019)
As a green and secure wireless transmission technique, secure spatial modulation (SM) is becoming a hot research area. Its basic idea is to exploit both the index of the activated transmit antenna and the amplitude-phase modulation (APM) signal to carry messages, improve security, and save energy. In this paper, we review its crucial techniques: transmit antenna selection (TAS), artificial noise (AN) projection, power allocation (PA), and joint detection at the desired receiver. To approach the optimal performance of the maximum-likelihood (ML) detector, a deep-neural-network (DNN) joint detector is proposed to jointly infer the transmit antenna index and the signal constellation point at lower complexity. Here, each layer of the DNN is redesigned to optimize the joint inference performance over the two distinct types of information: transmit antenna index and signal constellation point. Simulation results show that the proposed DNN method performs 3 dB better than the conventional DNN structure and is close to ML detection in the low and medium signal-to-noise ratio regions in terms of bit error rate (BER), while its complexity is far lower than that of ML. Finally, the three key transmitter-side techniques (TAS, PA, and AN projection) can be combined to make SM a truly secure modulation.
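For reference, the ML baseline that the proposed DNN detector approximates at lower complexity is an exhaustive joint search over the antenna index and the constellation point. A minimal sketch (the function name and the single-active-antenna channel model `y = H[:, n] * s + noise` are assumptions made for illustration):

```python
import numpy as np

def sm_ml_detect(y, H, constellation):
    """Exhaustive maximum-likelihood detection for spatial modulation:
    jointly search over the activated transmit antenna index n and the
    APM constellation symbol s, minimizing ||y - H[:, n] * s||^2."""
    best, best_cost = (None, None), np.inf
    for n in range(H.shape[1]):            # candidate transmit antenna
        h = H[:, n]
        for s in constellation:            # candidate APM symbol
            cost = np.linalg.norm(y - h * s) ** 2
            if cost < best_cost:
                best_cost, best = cost, (n, s)
    return best
```

The double loop makes the cost grow with the product of the antenna count and the constellation size, which is exactly the complexity the DNN joint detector is designed to avoid.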
The design of symbol detectors in digital communication systems has traditionally relied on statistical channel models that describe the relation between the transmitted symbols and the observed signal at the receiver. Here we review a data-driven framework for symbol detector design which combines machine learning (ML) and model-based algorithms. In this hybrid approach, well-known channel-model-based algorithms such as the Viterbi method, BCJR detection, and multiple-input multiple-output (MIMO) soft interference cancellation (SIC) are augmented with ML-based algorithms to remove their channel-model dependence, allowing the receiver to learn to implement these algorithms solely from data. The resulting data-driven receivers are most suitable for systems where the underlying channel models are poorly understood, highly complex, or do not capture the underlying physics well. Our approach is unique in that it only replaces the channel-model-based computations with dedicated neural networks that can be trained from a small amount of data, while keeping the general algorithm intact. Our results demonstrate that these techniques can yield near-optimal performance of model-based algorithms without knowing the exact channel input-output statistical relationship and in the presence of channel state information uncertainty.
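The hybrid principle — keep the algorithm's structure intact, learn only the channel-dependent computation — can be illustrated with a Viterbi decoder whose branch metric is an arbitrary callable (in the data-driven setting, a small trained network). This is an illustrative sketch, not the reviewed papers' code; the function name and the brute-force state loops are choices made for clarity:

```python
import numpy as np

def viterbi_data_driven(observations, num_states, metric_fn):
    """Viterbi decoding where the branch metric is supplied externally,
    e.g. by a learned model, instead of an analytic channel model.

    metric_fn(y, s_prev, s_next) returns a cost (a negative-log-likelihood
    estimate) for observation y on the state transition s_prev -> s_next.
    """
    T = len(observations)
    cost = np.zeros(num_states)                 # accumulated path costs
    back = np.zeros((T, num_states), dtype=int) # backpointers
    for t, y in enumerate(observations):
        new_cost = np.full(num_states, np.inf)
        for s_next in range(num_states):
            for s_prev in range(num_states):
                c = cost[s_prev] + metric_fn(y, s_prev, s_next)
                if c < new_cost[s_next]:
                    new_cost[s_next] = c
                    back[t, s_next] = s_prev
        cost = new_cost
    # Trace back the minimum-cost state sequence.
    path = [int(np.argmin(cost))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

Because only `metric_fn` depends on the channel, replacing it with a network trained on a small labeled dataset leaves the dynamic-programming recursion — and its optimality structure — untouched.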
We present an introduction to model-based machine learning for communication systems. We begin by reviewing existing strategies for combining model-based algorithms and machine learning from a high-level perspective, and compare them to the conventional deep learning approach which utilizes established deep neural network (DNN) architectures trained in an end-to-end manner. Then, we focus on symbol detection, which is one of the fundamental tasks of communication receivers. We show how the different strategies of conventional deep architectures, deep unfolding, and DNN-aided hybrid algorithms can be applied to this problem. The last two approaches constitute a middle ground between purely model-based and solely DNN-based receivers. By focusing on this specific task, we highlight the advantages and drawbacks of each strategy, and present guidelines to facilitate the design of future model-based deep learning systems for communications.
In this paper, the performance of adaptive turbo equalization for nonlinearity compensation (NLC) is investigated. A turbo equalization scheme is proposed where a recursive least-squares (RLS) algorithm is used as an adaptive channel estimator to track the time-varying intersymbol interference (ISI) coefficients associated with the inter-channel nonlinear interference (NLI) model. The estimated channel coefficients are used by a 2x2 MIMO soft-input soft-output (SISO) linear minimum mean square error (LMMSE) equalizer to compensate for the time-varying ISI. The SISO LMMSE equalizer and the SISO forward error correction (FEC) decoder exchange extrinsic information in every turbo iteration, allowing the receiver to improve the performance of the channel estimation and the equalization, achieving lower bit-error-rate (BER) values. The proposed scheme is investigated for polarization-multiplexed 64QAM and 256QAM, although it applies to any proper modulation format. Extensive numerical results are presented. It is shown that the scheme allows up to 0.7 dB extra gain in effective received signal-to-noise ratio (SNR) and up to 0.2 bits/symbol/pol in generalized mutual information (GMI), on top of the gain provided by single-channel digital backpropagation.
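The RLS channel-tracking step at the heart of such a scheme can be sketched as a single recursive update. This is a generic complex-valued RLS, not the paper's exact implementation; the function name and the forgetting-factor default are assumptions:

```python
import numpy as np

def rls_update(w, P, x, d, lam=0.99):
    """One recursive least-squares (RLS) update, as used by an adaptive
    channel estimator tracking time-varying ISI tap coefficients.

    w   : current tap estimates
    P   : inverse input-correlation matrix estimate
    x   : regressor (most recent input samples)
    d   : desired (reference) output sample
    lam : forgetting factor < 1, enabling tracking of time variation
    """
    Px = P @ x
    k = Px / (lam + np.vdot(x, Px))              # gain vector
    e = d - np.vdot(w, x)                        # a priori estimation error
    w = w + k * np.conj(e)                       # tap update
    P = (P - np.outer(k, np.conj(x)) @ P) / lam  # inverse-correlation update
    return w, P, e
```

The forgetting factor `lam` trades steady-state accuracy for tracking speed, which is what lets the estimator follow the time-varying NLI-induced ISI between turbo iterations.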
The main objective of this work is to develop a miniaturized, high-accuracy, single-turn absolute rotary encoder called ASTRAS360. Its measurement principle is based on capturing an image that uniquely identifies the rotation angle. To evaluate this angle, the image first has to be classified into its sector based on its color, and only then can the angle be regressed. Inspired by machine learning, we built a calibration setup able to generate labeled training data automatically. We used these training data to test, characterize, and compare several machine learning algorithms for the classification and the regression. In an additional experiment, we also characterized the tolerance of our rotary encoder to eccentric mounting. Our findings demonstrate that various algorithms can perform these tasks with high accuracy and reliability; furthermore, providing extra inputs (e.g., rotation direction) allows the machine learning algorithms to compensate for the mechanical imperfections of the rotary encoder.