A Machine Learning-Based Detection Technique for Optical Fiber Nonlinearity Mitigation

Published by: Abdelkerim Amari
Publication date: 2019
Paper language: English

We investigate the performance of a machine learning classification technique, called the Parzen window, to mitigate fiber nonlinearity in the context of dispersion-managed and dispersion-unmanaged systems. The technique is applied for detection at the receiver side and deals with the non-Gaussian nonlinear effects by designing improved decision boundaries. We also propose a two-stage mitigation technique using digital back-propagation and the Parzen window for dispersion-unmanaged systems. In this case, digital back-propagation compensates for the deterministic nonlinearity and the Parzen window deals with the stochastic nonlinear signal-noise interactions, which are not taken into account by digital back-propagation. A performance improvement of up to 0.4 dB in terms of Q-factor is observed.
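
As a rough illustration of how a Parzen-window detector works, the sketch below (Python/NumPy, with a synthetic 16-QAM constellation, a Gaussian kernel, and a hand-picked bandwidth h, all of which are assumptions rather than the authors' settings) estimates a kernel density per symbol class from labelled received samples and assigns each test sample to the class with the highest density, which is what produces the nonlinear decision boundaries described above.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic 16-QAM constellation (a stand-in for the received, distorted signal).
levels = np.array([-3, -1, 1, 3])
constellation = np.array([x + 1j * y for x in levels for y in levels])

def simulate(n_per_class, noise_std=0.45):
    """Generate noisy received samples and their true class labels."""
    labels = np.repeat(np.arange(len(constellation)), n_per_class)
    tx = constellation[labels]
    noise = noise_std * (rng.standard_normal(tx.shape) + 1j * rng.standard_normal(tx.shape))
    return tx + noise, labels

def parzen_detect(rx, train_rx, train_labels, h=0.3):
    """Assign each received sample to the class with the largest Parzen-window density.
    Equal class priors (equal training counts per class) are assumed here."""
    rx2d = np.column_stack([rx.real, rx.imag])               # test points,    shape (N, 2)
    tr2d = np.column_stack([train_rx.real, train_rx.imag])   # training points, shape (M, 2)
    decisions = np.empty(len(rx), dtype=int)
    for i, point in enumerate(rx2d):
        # Gaussian kernel evaluated at the test point for every training sample.
        d2 = np.sum((tr2d - point) ** 2, axis=1)
        k = np.exp(-d2 / (2 * h ** 2))
        # Summed kernel mass per class is proportional to the Parzen density estimate.
        scores = np.bincount(train_labels, weights=k, minlength=len(constellation))
        decisions[i] = np.argmax(scores)
    return decisions

train_rx, train_labels = simulate(200)
test_rx, test_labels = simulate(50)
decisions = parzen_detect(test_rx, train_rx, train_labels)
print("symbol error rate:", np.mean(decisions != test_labels))

The bandwidth h controls how smooth the learned boundaries are; in practice it would be tuned on held-out data rather than fixed by hand as in this sketch.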


Read also

Fiber Kerr nonlinearity is a fundamental limitation on the achievable capacity of long-distance optical fiber communication. Digital back-propagation (DBP) is a primary methodology for mitigating both linear and nonlinear impairments by solving the inverse-propagating nonlinear Schrödinger equation (NLSE), which requires detailed link information. Recently, paradigms based on neural networks (NN) have been proposed to mitigate nonlinear transmission impairments in optical communication systems. However, almost all neural network-based equalization schemes incur high computational complexity, which prevents practical implementation in commercial transmission systems. In this paper, we propose a center-oriented long short-term memory network (Co-LSTM) incorporating a simplified mode with a recycling mechanism in the equalization operation, which can mitigate fiber nonlinearity in coherent optical communication systems with ultralow complexity. To validate the proposed methodology, we carry out an experiment of ten-channel wavelength division multiplexing (WDM) transmission with 64 Gbaud polarization-division-multiplexed 16-ary quadrature amplitude modulation (16-QAM) signals. Co-LSTM and DBP achieve comparable nonlinearity mitigation performance. However, the complexity of Co-LSTM with the simplified mode is almost independent of the transmission distance and is much lower than that of DBP. The proposed Co-LSTM methodology presents an attractive approach to low-complexity nonlinearity mitigation with neural networks.
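
To make the DBP step concrete, here is a minimal sketch of the idea: a simple split-step Fourier solver propagates a pulse according to the scalar NLSE, and back-propagation reuses the same solver with negated dispersion and nonlinearity. The fiber parameters, pulse shape, and step counts are placeholder values chosen for illustration, not those of the WDM experiment described above.

import numpy as np

def split_step(field, dt, length, n_steps, beta2, gamma):
    """Propagate a complex field envelope using the (non-symmetric) split-step
    Fourier method: a frequency-domain dispersion step followed by a
    time-domain nonlinear phase step, repeated n_steps times."""
    h = length / n_steps
    omega = 2 * np.pi * np.fft.fftfreq(field.size, d=dt)
    dispersion = np.exp(0.5j * beta2 * omega ** 2 * h)
    for _ in range(n_steps):
        field = np.fft.ifft(np.fft.fft(field) * dispersion)
        field = field * np.exp(1j * gamma * np.abs(field) ** 2 * h)
    return field

# Hypothetical fiber and signal parameters (ps, km, W), for illustration only.
beta2, gamma = -21.7, 1.3          # ps^2/km and 1/(W km)
dt, n_samp = 1.0, 512              # sample spacing (ps) and number of samples
t = (np.arange(n_samp) - n_samp / 2) * dt
tx = (np.sqrt(0.01) * np.exp(-(t / 20.0) ** 2)).astype(complex)   # 10 mW Gaussian pulse

rx = split_step(tx, dt, length=80.0, n_steps=200, beta2=beta2, gamma=gamma)

# Digital back-propagation: run the received field through the same solver with
# negated parameters to undo the deterministic propagation effects.
recovered = split_step(rx, dt, length=80.0, n_steps=200, beta2=-beta2, gamma=-gamma)

print("max residual after DBP:", np.max(np.abs(recovered - tx)))

The small residual printed at the end reflects only the operator-splitting discretization error; in a real link, noise and stochastic signal-noise interactions remain after DBP, which is what the learned equalizers above target.
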
Machine learning techniques have recently received significant attention as promising approaches to deal with the optical channel impairments, and in particular, the nonlinear effects. In this work, a machine learning-based classification technique, known as the Parzen window (PW) classifier, is applied to mitigate the nonlinear effects in the optical channel. The PW classifier is used as a detector with improved nonlinear decision boundaries more adapted to the nonlinear fiber channel. Performance improvement is observed when applying the PW in the context of dispersion managed and dispersion unmanaged systems.
The diagnosis of heart diseases is a difficult task generally addressed through careful examination of patients' clinical data. Recently, heart rate variability (HRV) analysis, together with machine learning algorithms, has proved to be a valuable support in the diagnosis process. Until now, however, ischemic heart disease (IHD) has been diagnosed using Artificial Neural Networks (ANN) applied only to signs, symptoms, sequential ECG, and coronary angiography, an invasive tool, whereas it could probably be identified non-invasively using parameters extracted from HRV, a signal easily obtained from the ECG. In this study, 18 non-invasive features (age, gender, left ventricular ejection fraction, and 15 obtained from HRV) of 243 subjects (156 normal subjects and 87 IHD patients) were used to train and validate a series of ANNs differing in the number of input and hidden nodes. The best result was obtained using 7 input parameters and 7 hidden nodes, with an accuracy of 98.9% on the training dataset and 82% on the validation dataset.
The design of symbol detectors in digital communication systems has traditionally relied on statistical channel models that describe the relation between the transmitted symbols and the observed signal at the receiver. Here we review a data-driven framework for symbol detector design which combines machine learning (ML) and model-based algorithms. In this hybrid approach, well-known channel-model-based algorithms such as the Viterbi method, BCJR detection, and multiple-input multiple-output (MIMO) soft interference cancellation (SIC) are augmented with ML-based algorithms to remove their channel-model dependence, allowing the receiver to learn to implement these algorithms solely from data. The resulting data-driven receivers are most suitable for systems where the underlying channel models are poorly understood, highly complex, or do not capture the underlying physics well. Our approach is unique in that it replaces only the channel-model-based computations with dedicated neural networks that can be trained from a small amount of data, while keeping the general algorithm intact. Our results demonstrate that these techniques can yield near-optimal performance of model-based algorithms without knowing the exact channel input-output statistical relationship and in the presence of channel state information uncertainty.
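
As a concrete, simplified instance of this hybrid idea, the sketch below keeps the Viterbi recursion intact for a hypothetical two-tap ISI channel with BPSK symbols but learns the branch metrics from labelled data. A per-branch Gaussian fit stands in here for the small neural network used in the data-driven receivers reviewed above, and all channel taps, noise levels, and sequence lengths are made-up values for illustration.

import numpy as np

rng = np.random.default_rng(1)
symbols = np.array([1.0, -1.0])          # BPSK alphabet
h = np.array([0.9, 0.5])                 # hypothetical two-tap ISI channel (unknown to the detector)
noise_std = 0.3

def transmit(n):
    """Simulate y_t = h0*s_t + h1*s_{t-1} + noise, starting from symbol +1."""
    s = rng.choice(symbols, size=n)
    s_prev = np.concatenate(([1.0], s[:-1]))
    y = h[0] * s + h[1] * s_prev + noise_std * rng.standard_normal(n)
    return s, s_prev, y

# Learn the branch metrics from data (stand-in for the NN): one Gaussian per (prev, curr) pair.
s_tr, sp_tr, y_tr = transmit(5000)
branch_mean = np.zeros((2, 2))
branch_var = np.zeros((2, 2))
for i, prev in enumerate(symbols):
    for j, curr in enumerate(symbols):
        sel = (sp_tr == prev) & (s_tr == curr)
        branch_mean[i, j] = y_tr[sel].mean()
        branch_var[i, j] = y_tr[sel].var()

def branch_metric(y, i, j):
    """Negative log-likelihood of observing y on the branch (prev=i, curr=j), learned from data."""
    return 0.5 * np.log(2 * np.pi * branch_var[i, j]) + (y - branch_mean[i, j]) ** 2 / (2 * branch_var[i, j])

def viterbi(y):
    """Standard Viterbi over states = previous symbol; only the metric is data-driven."""
    n = len(y)
    cost = np.array([0.0, np.inf])                       # start in state +1 (index 0)
    back = np.zeros((n, 2), dtype=int)
    for t in range(n):
        new_cost = np.full(2, np.inf)
        for j in range(2):                               # next state = current symbol
            metrics = cost + np.array([branch_metric(y[t], i, j) for i in range(2)])
            back[t, j] = np.argmin(metrics)
            new_cost[j] = metrics[back[t, j]]
        cost = new_cost
    path = np.zeros(n, dtype=int)                        # trace back the best symbol sequence
    path[-1] = np.argmin(cost)
    for t in range(n - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return symbols[path]

s_te, _, y_te = transmit(2000)
detected = viterbi(y_te)
print("bit error rate:", np.mean(detected != s_te))

The point of the sketch is the separation of concerns: the dynamic-programming structure of the detector is kept as-is, while the channel-dependent likelihoods are estimated from a modest amount of labelled data instead of an explicit channel model.
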
As data generation increasingly takes place on devices without a wired connection, Machine Learning over wireless networks becomes critical. Many studies have shown that traditional wireless protocols are highly inefficient or unsustainable for supporting Distributed Machine Learning. This is creating the need for new wireless communication methods. In this survey, we give an exhaustive review of state-of-the-art wireless methods that are specifically designed to support Machine Learning services, namely over-the-air computation and radio resource allocation optimized for Machine Learning. In the over-the-air approach, multiple devices communicate simultaneously over the same time slot and frequency band to exploit the superposition property of wireless channels for gradient averaging over the air. In radio resource allocation optimized for Machine Learning, Active Learning metrics allow for data evaluation to greatly optimize the assignment of radio resources. This paper gives a comprehensive introduction to these methods, reviews the most important works, and highlights crucial open problems.
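
The following toy example sketches the over-the-air averaging principle under idealized assumptions (perfectly known real channel gains that each device pre-inverts, simultaneous transmission, additive receiver noise): the channel sums the analog signals, and dividing the received vector by the number of devices yields an approximate gradient average. All dimensions and noise values are placeholders.

import numpy as np

rng = np.random.default_rng(2)
n_devices, dim = 10, 8
noise_std = 0.05

# Local gradients computed by each device (placeholder random vectors).
local_grads = rng.standard_normal((n_devices, dim))

# Each device inverts its (assumed known) real channel gain before transmitting,
# so the signals superimpose with equal weight at the receiver.
channel_gains = rng.uniform(0.5, 1.5, size=n_devices)
tx_signals = local_grads / channel_gains[:, None]

# The wireless channel adds all simultaneous transmissions, plus receiver noise.
rx = (channel_gains[:, None] * tx_signals).sum(axis=0) + noise_std * rng.standard_normal(dim)

# The server's estimate of the average gradient is the received sum divided by the device count.
ota_average = rx / n_devices
true_average = local_grads.mean(axis=0)
print("max deviation from the true average:", np.max(np.abs(ota_average - true_average)))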
