Present-day communication systems routinely use codes that approach the channel capacity when coupled with a computationally efficient decoder. However, the decoder is typically designed for the Gaussian noise channel and is known to be sub-optimal for non-Gaussian noise distributions. Deep learning methods offer a new approach for designing decoders that can be trained and tailored to arbitrary channel statistics. We focus on Turbo codes and propose DeepTurbo, a novel deep-learning-based architecture for Turbo decoding. The standard Turbo decoder (Turbo) iteratively applies the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm with an interleaver in between. A neural architecture for Turbo decoding, termed NeuralBCJR, was proposed recently. There, the key idea is to create a module that imitates the BCJR algorithm using supervised learning, to combine it with the interleaver architecture, and then to fine-tune the result with end-to-end training. However, designing such an architecture requires knowledge of the BCJR algorithm, which also constrains the resulting learned decoder. Here we remove this requirement and propose a fully end-to-end trained neural decoder, the Deep Turbo Decoder (DeepTurbo). With a novel learnable decoder structure and training methodology, DeepTurbo achieves superior performance in both AWGN and non-AWGN settings compared to the other two decoders, Turbo and NeuralBCJR. Furthermore, among the three, DeepTurbo exhibits the lowest error floor.
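To make the iterative structure concrete, here is a minimal sketch of a Turbo-style neural decoder in the spirit of DeepTurbo: two learnable soft decoders exchange (de)interleaved extrinsic information over a fixed number of iterations, with separate weights per iteration. All names (`SoftDecoder`, `DeepTurboSketch`), the GRU choice, and the layer sizes are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class SoftDecoder(nn.Module):
    """Bidirectional GRU mapping received symbols + prior to soft outputs."""
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(in_dim, hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, 1)

    def forward(self, x):
        h, _ = self.rnn(x)
        return self.out(h)                            # soft information per position

class DeepTurboSketch(nn.Module):
    def __init__(self, n_iter=6):
        super().__init__()
        self.n_iter = n_iter
        # Unlike the classic decoder, each iteration has its own weights.
        self.dec1 = nn.ModuleList([SoftDecoder(3) for _ in range(n_iter)])
        self.dec2 = nn.ModuleList([SoftDecoder(3) for _ in range(n_iter)])

    def forward(self, sys, par1, par2, perm):
        # sys/par1/par2: (batch, block_len, 1); perm: interleaver indices
        prior = torch.zeros_like(sys)
        inv = torch.argsort(perm)
        for i in range(self.n_iter):
            post = self.dec1[i](torch.cat([sys, par1, prior], dim=-1))
            prior = (post - prior)[:, perm, :]        # extrinsic, interleaved
            post = self.dec2[i](torch.cat([sys[:, perm, :], par2, prior], dim=-1))
            prior = (post - prior)[:, inv, :]         # extrinsic, de-interleaved
        return torch.sigmoid(post[:, inv, :])         # bit estimates
```

Training such a model end to end on (noisy codeword, message) pairs requires no knowledge of the BCJR recursions, which is the point of the fully learned approach.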
Turbo codes and CRC codes are usually decoded separately according to the serially concatenated inner codes and outer codes respectively. In this letter, we propose a hybrid decoding algorithm of turbo-CRC codes, where the outer codes, CRC codes, are not used for error detection but as an assistance to improve the error correction performance. Two independent iterative decoding and reliability based decoding are carried out in a hybrid schedule, which can efficiently decode the two different codes as an entire codeword. By introducing an efficient error detecting method based on normalized Euclidean distance without CRC check, significant gain can be obtained by using the hybrid decoding method without loss of the error detection ability.
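Since the CRC is repurposed for correction, decoding success must be declared by another means. A minimal sketch of a normalized-Euclidean-distance stopping test of the kind described above: re-modulate the current hard decisions and compare them with the received samples. The BPSK mapping and the threshold value are illustrative assumptions.

```python
import numpy as np

def likely_error_free(received, hard_bits, threshold=0.1):
    """Declare decoding success when the normalized squared Euclidean
    distance between the received samples and the re-modulated codeword
    falls below a threshold (no CRC check needed)."""
    remod = 1.0 - 2.0 * hard_bits                 # BPSK: 0 -> +1, 1 -> -1
    dist = np.sum((received - remod) ** 2) / len(received)
    return dist < threshold
```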
In this paper, the performance of adaptive turbo equalization for nonlinearity compensation (NLC) is investigated. A turbo equalization scheme is proposed in which a recursive least-squares (RLS) algorithm serves as an adaptive channel estimator to track the time-varying intersymbol interference (ISI) coefficients associated with the inter-channel nonlinear interference (NLI) model. The estimated channel coefficients are used by a 2x2 multiple-input multiple-output (MIMO) soft-input soft-output (SISO) linear minimum mean square error (LMMSE) equalizer to compensate for the time-varying ISI. The SISO LMMSE equalizer and the SISO forward error correction (FEC) decoder exchange extrinsic information in every turbo iteration, allowing the receiver to improve both channel estimation and equalization and thereby achieve lower bit-error rates (BERs). The proposed scheme is investigated for polarization-multiplexed 64QAM and 256QAM, although it applies to any proper modulation format. Extensive numerical results are presented, showing that the scheme provides up to 0.7 dB of extra gain in effective received signal-to-noise ratio (SNR) and up to 0.2 bits/symbol/pol in generalized mutual information (GMI), on top of the gain provided by single-channel digital backpropagation.
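For reference, one RLS tracking step of the kind used by such an adaptive channel estimator looks as follows; the forgetting factor `lam` and the tap count are assumptions, and this is a generic single-output sketch rather than the paper's 2x2 MIMO formulation.

```python
import numpy as np

def rls_step(h, P, x, d, lam=0.99):
    """One RLS iteration. h: (L,) complex tap estimates, P: (L, L)
    inverse correlation matrix, x: (L,) regressor (recent symbols),
    d: scalar observation. Returns updated (h, P)."""
    Px = P @ x
    g = Px / (lam + np.conj(x) @ Px)              # gain vector
    e = d - np.conj(h) @ x                        # a-priori error
    h = h + g * np.conj(e)                        # tap update
    P = (P - np.outer(g, np.conj(x) @ P)) / lam   # inverse-correlation update
    return h, P
```

The forgetting factor trades tracking speed against estimation noise, which matters here because the NLI-induced ISI coefficients drift over time.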
Recently, deep learning methods have shown significant improvements in communication systems. In this paper, we study the equalization problem over nonlinear channels using neural networks. A joint equalizer and decoder based on neural networks is proposed to realize blind equalization and decoding without knowledge of the channel state information (CSI). Unlike previous methods, we use two neural networks instead of one. First, a convolutional neural network (CNN) adaptively recovers the transmitted signal from channel impairments and nonlinear distortions. Then a deep neural network decoder (NND) decodes the detected signal from the CNN equalizer. Under various channel conditions, the experimental results demonstrate that the proposed CNN equalizer achieves better performance than other solutions based on machine learning methods. The proposed model uses about $2/3$ fewer parameters than state-of-the-art counterparts. Moreover, our model can easily be applied to long sequences with $\mathcal{O}(n)$ complexity.
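A minimal sketch of a 1-D CNN equalizer of this kind: convolutions share parameters across positions, which is what gives the $\mathcal{O}(n)$ complexity in the sequence length. The layer widths and kernel sizes below are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

cnn_equalizer = nn.Sequential(
    nn.Conv1d(1, 32, kernel_size=9, padding=4),   # compensate ISI
    nn.ReLU(),
    nn.Conv1d(32, 32, kernel_size=9, padding=4),  # undo nonlinear distortion
    nn.ReLU(),
    nn.Conv1d(32, 1, kernel_size=9, padding=4),   # equalized signal
)

# received: (batch, 1, sequence_length) distorted samples
received = torch.randn(8, 1, 1024)
equalized = cnn_equalizer(received)               # then fed to the NND for decoding
```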
In this paper, we propose a turbo receiver for joint activity detection and data decoding in grant-free massive random access, which iterates between a detector and a belief propagation (BP)-based channel decoder. Specifically, the detector, which is responsible for user activity detection, channel estimation, and soft data-symbol detection, is formulated as a bilinear inference problem that exploits the common sparsity pattern in the received pilot and data signals. The bilinear generalized approximate message passing (BiG-AMP) algorithm is adopted to solve this problem, using the probabilities of the transmitted data symbols estimated by the channel decoder as prior knowledge. In addition, extrinsic information is passed from the detector to the decoder to improve the channel decoding accuracy. Simulation results show significant improvements achieved by the proposed turbo receiver compared with conventional designs.
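The outer schedule of such a receiver can be sketched in the log-likelihood-ratio (LLR) domain, where only extrinsic information (posterior minus prior) is passed between the two blocks. The `detect` and `decode` functions below are hypothetical stand-ins for the BiG-AMP detector and the BP channel decoder, included only to make the message flow concrete.

```python
import numpy as np

def detect(y, prior_llr):
    # Stand-in detector: channel LLRs refined by the decoder's prior.
    return 2.0 * y + 0.5 * prior_llr

def decode(llr):
    # Stand-in decoder: a real BP decoder would exploit the code graph.
    return 1.5 * llr

def turbo_receive(y, n_outer=5):
    prior_det = np.zeros_like(y)                  # no prior on first pass
    for _ in range(n_outer):
        post_det = detect(y, prior_det)
        ext_det = post_det - prior_det            # extrinsic to the decoder
        post_dec = decode(ext_det)
        prior_det = post_dec - ext_det            # extrinsic back to detector
    return (post_dec < 0).astype(int)             # hard bit decisions
```

Subtracting the incoming prior before each handoff is what keeps the two blocks from amplifying their own beliefs across iterations.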
In this paper, we study a deep-learning-based method for the multicell power control problem of sum-rate maximization subject to per-user rate constraints and per-base-station (BS) power constraints. The core difficulty of this problem is ensuring that the power control results learned by the deep neural network (DNN) satisfy the per-user rate constraints. To tackle this difficulty, we propose to cascade a projection block after a conventional DNN, which projects infeasible power control results onto the constraint set. The projection block is designed based on a geometrical interpretation of the constraints and has low complexity, meeting the real-time requirements of online applications. An explicit-form expression of the backpropagated gradient is derived for the proposed projection block, with which the DNN can be trained to directly maximize the sum rate via unsupervised learning. We also develop a heuristic implementation of the projection block to reduce the size of the DNN. Simulation results demonstrate the advantages of the proposed method over existing deep learning and numerical optimization methods, and show its robustness to model mismatch between training and testing datasets.
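A minimal sketch of a projection block with an explicit backward pass, in the spirit of this approach: for brevity it enforces only a per-BS sum-power budget by rescaling any infeasible output, whereas the paper's block also handles per-user rate constraints via a geometric projection, which is not reproduced here. `P_MAX`, the rescaling rule, and the hand-derived Jacobian-vector product are illustrative assumptions.

```python
import torch

P_MAX = 1.0  # assumed per-BS power budget

class ScaleProjection(torch.autograd.Function):
    @staticmethod
    def forward(ctx, p):
        s = p.sum(dim=-1, keepdim=True)
        scale = torch.clamp(P_MAX / s, max=1.0)   # shrink only if infeasible
        ctx.save_for_backward(p, s, scale)
        return p * scale

    @staticmethod
    def backward(ctx, g):
        p, s, scale = ctx.saved_tensors
        # Where scale == 1 the map is the identity; otherwise
        # out_i = P_MAX * p_i / s, whose Jacobian-vector product is below.
        gp = (g * p).sum(dim=-1, keepdim=True)
        grad_scaled = P_MAX * (g / s - gp / s ** 2)
        return torch.where(scale < 1.0, grad_scaled, g)

# Cascade after a DNN and train with an unsupervised (negative sum-rate) loss:
raw_power = torch.rand(4, 3, requires_grad=True) * 2   # hypothetical DNN output
feasible = ScaleProjection.apply(raw_power)            # gradients flow through
```

Because the backward pass is given in closed form, the projection adds essentially no training overhead while guaranteeing that every output fed to the sum-rate loss is feasible.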