
Physics-Based Deep Learning for Fiber-Optic Communication Systems

Posted by Christian Häger
Publication date: 2020
Paper language: English





We propose a new machine-learning approach for fiber-optic communication systems whose signal propagation is governed by the nonlinear Schrödinger equation (NLSE). Our main observation is that the popular split-step method (SSM) for numerically solving the NLSE has essentially the same functional form as a deep multi-layer neural network; in both cases, one alternates linear steps and pointwise nonlinearities. We exploit this connection by parameterizing the SSM and viewing the linear steps as general linear functions, similar to the weight matrices in a neural network. The resulting physics-based machine-learning model has several advantages over black-box function approximators. For example, it allows us to examine and interpret the learned solutions in order to understand why they perform well. As an application, low-complexity nonlinear equalization is considered, where the task is to efficiently invert the NLSE. This is commonly referred to as digital backpropagation (DBP). Rather than employing neural networks, the proposed algorithm, dubbed learned DBP (LDBP), uses the physics-based model with trainable filters in each step, and its complexity is reduced by progressively pruning filter taps during gradient descent. Our main finding is that the filters can be pruned to remarkably short lengths (as few as 3 taps per step) without sacrificing performance. As a result, the complexity can be reduced by orders of magnitude in comparison to prior work. By inspecting the filter responses, an additional theoretical justification for the learned parameter configurations is provided. Our work illustrates that combining data-driven optimization with existing domain knowledge can generate new insights into old communications problems.
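To make the SSM-as-network analogy concrete, here is a minimal PyTorch sketch of an LDBP-style model: each step applies a short trainable complex FIR filter (the linear step) followed by a pointwise Kerr phase rotation (the nonlinearity). Class and parameter names, sizes, and initializations are assumptions for illustration, not the authors' implementation; the progressive tap pruning described in the abstract is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def complex_fir(x, h, pad):
    """Complex 1-D convolution assembled from real conv1d calls
    (PyTorch's conv1d does not accept complex tensors directly)."""
    xr, xi = x.real.unsqueeze(1), x.imag.unsqueeze(1)
    hr = h.real.flip(0).reshape(1, 1, -1)  # flip: conv1d is cross-correlation
    hi = h.imag.flip(0).reshape(1, 1, -1)
    yr = F.conv1d(xr, hr, padding=pad) - F.conv1d(xi, hi, padding=pad)
    yi = F.conv1d(xr, hi, padding=pad) + F.conv1d(xi, hr, padding=pad)
    return torch.complex(yr.squeeze(1), yi.squeeze(1))

class LDBP(nn.Module):
    """LDBP-style model: alternating trainable linear filters and
    pointwise Kerr nonlinearities, mirroring the split-step method."""
    def __init__(self, num_steps=25, taps=3):
        super().__init__()
        # One short complex filter per step, stored as real/imag parts.
        self.h_re = nn.Parameter(torch.zeros(num_steps, taps))
        self.h_im = nn.Parameter(torch.zeros(num_steps, taps))
        self.h_re.data[:, taps // 2] = 1.0  # initialize near identity
        # Per-step nonlinear coefficient (plays the role of gamma * step length).
        self.gamma = nn.Parameter(torch.full((num_steps,), 1e-3))
        self.pad = taps // 2

    def forward(self, x):  # x: complex tensor of shape (batch, samples)
        for k in range(self.h_re.shape[0]):
            h = torch.complex(self.h_re[k], self.h_im[k])
            x = complex_fir(x, h, self.pad)  # linear (dispersion) step
            # Nonlinear step: phase rotation proportional to instantaneous
            # power, with opposite sign to undo the fiber's Kerr effect.
            x = x * torch.exp(-1j * self.gamma[k] * x.abs() ** 2)
        return x
```

Training would then minimize, for example, the mean squared error between the model output and the transmitted waveform; taps whose magnitudes shrink toward zero during training become candidates for pruning.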




Read also

In this paper, an unsupervised machine-learning method for geometric constellation shaping is investigated. By embedding a differentiable fiber channel model between two neural networks, the learning algorithm optimizes the geometric constellation shape. The learned constellations yield improved performance over state-of-the-art geometrically shaped constellations and include an implicit trade-off between amplification noise and nonlinear effects. Further, the method allows joint optimization of system parameters, such as the optimal launch power, simultaneously with the constellation shape. An experimental demonstration validates the findings, with improvements of up to 0.13 bit/4D in simulation and up to 0.12 bit/4D experimentally.
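As a rough illustration of the end-to-end idea, the sketch below trains a free-form constellation through a toy differentiable channel, using a power-dependent phase rotation plus Gaussian noise as a crude stand-in for the paper's fiber model; the constellation size, decoder architecture, and channel coefficients are all assumptions.

```python
import torch
import torch.nn as nn

# Trainable constellation: M points in the I/Q plane.
M = 64
constellation = nn.Parameter(torch.randn(M, 2) * 0.1)
decoder = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, M))
opt = torch.optim.Adam([constellation, *decoder.parameters()], lr=1e-3)

for step in range(2000):
    labels = torch.randint(0, M, (512,))
    tx = constellation[labels]
    tx = tx / tx.pow(2).sum(1).mean().sqrt()  # unit average power
    # Toy differentiable channel: power-dependent phase rotation
    # (mimicking a nonlinear phase shift) plus Gaussian noise.
    phi = 0.1 * tx.pow(2).sum(1, keepdim=True)
    rx = torch.cat([tx[:, :1] * torch.cos(phi) - tx[:, 1:] * torch.sin(phi),
                    tx[:, :1] * torch.sin(phi) + tx[:, 1:] * torch.cos(phi)], dim=1)
    rx = rx + 0.05 * torch.randn_like(rx)
    # Backpropagating cross-entropy through the channel shapes the constellation.
    loss = nn.functional.cross_entropy(decoder(rx), labels)
    opt.zero_grad(); loss.backward(); opt.step()
```

Because the channel sits between the trainable constellation and the decoder, gradients flow through it, which is what lets launch power or other system parameters be optimized jointly with the shape.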
Wei Chen, Bowen Zhang, Shi Jin (2020)
Sparse signal recovery problems from noisy linear measurements appear in many areas of wireless communications. In recent years, deep learning (DL) based approaches have attracted the interest of researchers seeking to solve the sparse linear inverse problem by unfolding iterative algorithms as neural networks. Typically, DL research assumes a fixed number of network layers. However, this ignores a key characteristic of traditional iterative algorithms, where the number of iterations required for convergence changes with the sparsity level. By investigating projected gradient descent, we unveil the drawbacks of existing DL methods with fixed depth. We then propose an end-to-end trainable DL architecture that involves an extra halting score at each layer. The proposed method thus learns how many layers to execute before emitting an output, and the network depth is dynamically adjusted for each task in the inference phase. We conduct experiments using both synthetic data and applications including random access in massive MTC and massive MIMO channel estimation, and the results demonstrate the improved efficiency of the proposed approach.
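A minimal sketch of the adaptive-depth idea, assuming projected gradient descent unfolded into layers with soft-thresholding as the sparsity projection and a learned per-layer halting score; the halting rule and all dimensions here are illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class UnfoldedPGD(nn.Module):
    """Unfolded projected gradient descent with a per-layer halting score."""
    def __init__(self, A, max_layers=15):
        super().__init__()
        self.A = A  # (m, n) measurement matrix, with y = x @ A.T + noise
        self.step = nn.Parameter(torch.full((max_layers,), 0.5))    # step sizes
        self.thresh = nn.Parameter(torch.full((max_layers,), 0.1))  # thresholds
        self.halt = nn.ModuleList(nn.Linear(A.shape[1], 1) for _ in range(max_layers))

    def forward(self, y, halt_at=0.5):
        x = torch.zeros(y.shape[0], self.A.shape[1])
        for k, halt in enumerate(self.halt):
            # Gradient step on 0.5 * ||y - x A^T||^2 ...
            grad = (x @ self.A.t() - y) @ self.A
            x = x - self.step[k] * grad
            # ... followed by soft-thresholding, the proximal step for sparsity.
            x = torch.sign(x) * torch.clamp(x.abs() - self.thresh[k].abs(), min=0)
            # At inference, the learned halting score decides whether to emit x
            # now; training-time supervision of the scores is omitted here.
            if not self.training and torch.sigmoid(halt(x)).mean() > halt_at:
                break
        return x
```

The point of the halting score is that easy (sparser or less noisy) instances exit after few layers while hard ones run deeper, instead of every input paying for the worst-case depth.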
Ke Ma, Dongxuan He, Hancun Sun (2021)
The huge overhead of beam training imposes a significant challenge on millimeter-wave (mmWave) wireless communications. To address this issue, we propose a wide-beam-based training approach that calibrates the narrow beam direction according to the channel power leakage. To handle the complex nonlinear properties of the channel power leakage, deep learning is utilized to predict the optimal narrow beam directly. Specifically, three deep-learning-assisted calibrated beam training schemes are proposed. The first scheme adopts a convolutional neural network to perform the prediction based on the instantaneous received signals of wide beam training; additional narrow beam training based on the predicted probabilities then further calibrates the beam direction. However, the first scheme depends on a single wide beam training and therefore lacks robustness to noise. To tackle this problem, the second scheme adopts a long short-term memory (LSTM) network to track the movement of users and calibrate the beam direction according to the received signals of prior beam training, enhancing robustness to noise. To further reduce the overhead of wide beam training, our third scheme, an adaptive beam training strategy, selects a subset of wide beams to be trained based on the prior received signals. Two selection criteria are designed: an optimal neighboring criterion and a maximum probability criterion. Furthermore, to handle mobile scenarios, an auxiliary LSTM is introduced to calibrate the directions of the selected wide beams more precisely. Simulation results demonstrate that our proposed schemes achieve significantly higher beamforming gain with smaller beam training overhead than conventional and existing deep-learning-based counterparts.
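A minimal sketch of the first (CNN-based) scheme, assuming W wide-beam measurements as input and N candidate narrow beams as output; the architecture and dimensions are illustrative guesses, not the paper's network.

```python
import torch
import torch.nn as nn

W, N = 16, 128  # wide beams swept / candidate narrow beams (assumed values)
cnn = nn.Sequential(
    nn.Conv1d(2, 32, kernel_size=3, padding=1), nn.ReLU(),  # 2 channels: I and Q
    nn.Conv1d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * W, N),
)
rx = torch.randn(8, 2, W)             # batch of wide-beam received signals
probs = cnn(rx).softmax(dim=-1)       # predicted probability for each narrow beam
top4 = probs.topk(4, dim=-1).indices  # candidates for extra narrow-beam training
```

The top-probability candidates correspond to the "additional narrow beam training" step: instead of sweeping all N narrow beams, only a handful of likely directions are tested.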
Jiaqi Xu, Yuanwei Liu (2020)
The reconfigurable intelligent surface (RIS) is one of the promising technologies contributing to the next-generation smart radio environment. A novel physics-based RIS channel model is proposed. In particular, we treat the RIS and the scattering environment as a whole by studying the signal's multipath propagation as well as the radiation pattern of the RIS. The model suggests that the RIS-assisted wireless channel can be approximated by a Rician distribution. Analytical expressions are derived for the shape factor and the scale factor of the distribution. For the case of continuous phase shifts, the distribution depends on the number of RIS elements and the observing direction of the receiver. For the case of discrete phase shifts, the distribution further depends on the quantization level of the RIS phase error. The scaling law of the average received power is obtained from the scale factor of the distribution. For application scenarios where the RIS functions as an anomalous reflector, we investigate the performance of single-RIS-assisted multiple access networks for time-division multiple access (TDMA), frequency-division multiple access (FDMA), and non-orthogonal multiple access (NOMA). Closed-form expressions for the outage probability of the proposed channel model are derived. It is proved that a constant diversity order exists, which is independent of the number of RIS elements. Simulation results confirm that the proposed model applies effectively to phased-array-implemented RISs.
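For reference, the standard Rician envelope density in the shape-factor/scale-factor parameterization that the model approximates is shown below; the paper's closed-form expressions for K and Ω in terms of the RIS parameters are not reproduced here.

```latex
f_{|h|}(x) = \frac{2(K+1)\,x}{\Omega}
  \exp\!\left(-K - \frac{(K+1)\,x^{2}}{\Omega}\right)
  I_{0}\!\left(2\sqrt{\frac{K(K+1)}{\Omega}}\,x\right), \qquad x \ge 0,
```

where K is the shape factor (ratio of specular to diffuse power), Ω is the scale factor (average power), and I_0 is the modified Bessel function of the first kind.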
We use white Gaussian noise as a test signal for single-mode and multimode transmission links and estimate the link capacity based on a calculation of mutual information. We also extract complex-amplitude channel estimates and the mode-dependent loss with high accuracy.
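This abstract is terse, so here is a minimal sketch of the Gaussian-test-signal idea under a simplifying single-channel, memoryless AWGN assumption: transmit white Gaussian noise, recover the complex channel gain by correlation, and estimate mutual information from the residual noise power. The variable names and toy channel are illustrative, not the authors' measurement setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Transmit complex white Gaussian noise with unit average power.
tx = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
h_true = 0.8 * np.exp(1j * 0.3)  # unknown complex channel gain to recover
rx = h_true * tx + 0.1 * (rng.standard_normal(n)
                          + 1j * rng.standard_normal(n)) / np.sqrt(2)

# Least-squares estimate of the complex gain by correlation with tx.
h_est = np.vdot(tx, rx) / np.vdot(tx, tx)
noise = rx - h_est * tx
snr = np.abs(h_est) ** 2 * np.mean(np.abs(tx) ** 2) / np.mean(np.abs(noise) ** 2)
mi = np.log2(1 + snr)  # Gaussian-channel mutual information, bits per symbol
print(f"gain estimate {h_est:.3f}, MI ~ {mi:.2f} bit/symbol")
```

For a multimode link the same recipe would apply per mode pair, yielding a complex transfer matrix whose singular values reveal the mode-dependent loss.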
