We propose a deep-learning approach to the joint MIMO detection and channel decoding problem. Conventional MIMO receivers take a model-based approach, performing MIMO detection and channel decoding in a linear or iterative manner. However, due to the complex MIMO signal model, the optimal solution to the joint MIMO detection and channel decoding problem (i.e., maximum-likelihood decoding of the transmitted codewords from the received MIMO signals) is computationally infeasible. As a practical measure, current model-based MIMO receivers all resort to suboptimal MIMO decoding methods with affordable computational complexity. This work applies recent advances in deep learning to the design of MIMO receivers. In particular, we leverage deep neural networks (DNNs) with supervised training to solve the joint MIMO detection and channel decoding problem. We show that a DNN can be trained to deliver considerably better decoding performance than conventional MIMO receivers. Our simulations show that a DNN implementation consisting of seven hidden layers can outperform conventional model-based linear or iterative receivers. This performance improvement points to a new direction for future MIMO receiver design.
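As a concrete illustration of this line of work, the sketch below builds a supervised DNN receiver with seven hidden layers that maps received MIMO samples directly to per-bit estimates. The input/output dimensions, layer widths, and optimizer settings are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch (not the paper's exact design): a fully connected DNN
# with seven hidden layers mapping received MIMO samples to per-bit
# posteriors, trained with a supervised cross-entropy loss.
import torch
import torch.nn as nn

N_RX = 8          # assumed: real-valued receive dimension (I/Q stacked)
N_BITS = 16       # assumed: information bits per codeword

layers, dim = [], N_RX
for width in [128] * 7:                 # seven hidden layers, as in the abstract
    layers += [nn.Linear(dim, width), nn.ReLU()]
    dim = width
layers += [nn.Linear(dim, N_BITS), nn.Sigmoid()]  # per-bit estimates
receiver = nn.Sequential(*layers)

loss_fn = nn.BCELoss()
opt = torch.optim.Adam(receiver.parameters(), lr=1e-3)

# One supervised training step on synthetic data; in practice y is the
# received signal corresponding to a known transmitted bit vector b.
y = torch.randn(32, N_RX)                      # batch of received vectors
b = torch.randint(0, 2, (32, N_BITS)).float()  # ground-truth bits
opt.zero_grad()
loss = loss_fn(receiver(y), b)
loss.backward()
opt.step()
```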
We revisit the idea of using deep neural networks for one-shot decoding of random and structured codes, such as polar codes. Although it is possible to achieve maximum a posteriori (MAP) bit error rate (BER) performance for both code families at short codeword lengths, we observe that (i) structured codes are easier to learn and (ii) the neural network is able to generalize to codewords never seen during training for structured codes, but not for random ones. These results provide some evidence that neural networks can learn a form of decoding algorithm, rather than only a simple classifier. We introduce the normalized validation error (NVE) metric in order to further investigate the potential and limitations of deep learning-based decoding with respect to performance and complexity.
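For reference, the NVE metric in this line of work averages, over a set of validation SNRs, the ratio of the neural network decoder's BER to the MAP BER; the LaTeX rendering below is our paraphrase of that definition, and the notation may differ slightly from the paper's.

```latex
% \rho_t is the SNR used for training and \rho_{v,1},\dots,\rho_{v,S}
% are the S validation SNR points.
\[
  \mathrm{NVE}(\rho_t)
  = \frac{1}{S} \sum_{s=1}^{S}
    \frac{\mathrm{BER}_{\mathrm{NND}}(\rho_t, \rho_{v,s})}
         {\mathrm{BER}_{\mathrm{MAP}}(\rho_{v,s})}
\]
% NVE = 1 means the neural network decoder (NND) matches MAP performance
% on average over the validation SNRs; larger values quantify the gap.
```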
In this paper, we address message-passing receiver design for 3D massive MIMO-OFDM systems. With the aid of a central-limit argument and a Taylor-series approximation, a computationally efficient receiver that performs joint channel estimation and decoding is devised within the framework of expectation propagation. Specifically, the local belief defined at the channel transition function is expanded up to second order using Wirtinger calculus, transforming the messages sent by the channel transition function into a tractable form. As a result, the channel impulse response (CIR) between each pair of antennas is estimated by Gaussian message passing. In addition, a variational expectation-maximization (EM)-based method is derived to learn the channel power-delay profile (PDP). The proposed joint algorithm is assessed in 3D massive MIMO systems with spatially correlated channels, and the empirical results corroborate its superiority in terms of both performance and complexity.
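The Gaussian message passing mentioned here rests on the standard expectation-propagation moment-matching update; the LaTeX sketch below states that generic update. The paper's specific contribution, the second-order Wirtinger expansion of the channel transition factor, is layered on top of it.

```latex
% Generic EP update (a sketch). For a factor f_a(x) with cavity
% distribution q^{\setminus a}(x) = q(x)/\tilde f_a(x):
\[
  q^{\mathrm{new}}(x)
  = \operatorname{proj}_{\mathcal{G}}\!\bigl[\, q^{\setminus a}(x)\, f_a(x) \,\bigr],
  \qquad
  \tilde f_a^{\mathrm{new}}(x) \propto \frac{q^{\mathrm{new}}(x)}{q^{\setminus a}(x)},
\]
% where proj_G denotes moment matching onto the Gaussian family, which
% keeps all messages (and hence the CIR estimates) in Gaussian form.
```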
Although the sphere decoder (SD) is a powerful detector for multiple-input multiple-output (MIMO) systems, it becomes computationally prohibitive in massive MIMO systems, where a large number of antennas are employed. To overcome this challenge, we propose the fast deep-learning (DL)-aided SD (FDL-SD) and fast DL-aided $K$-best SD (KSD, FDL-KSD) algorithms. Therein, the main role of DL is to generate a highly reliable initial candidate that accelerates the search in SD and KSD, in conjunction with candidate/layer ordering and early rejection. Compared to existing DL-aided SD schemes, the proposed schemes are more advantageous in both the offline training and online application phases. In particular, unlike existing DL-aided SD schemes, they do not require running the conventional SD during the training phase. For a $24 \times 24$ MIMO system with QPSK, the proposed FDL-SD achieves a complexity reduction of more than $90\%$ without any performance loss compared to conventional SD schemes. For a $32 \times 32$ MIMO system with QPSK, the proposed FDL-KSD requires only $K = 32$ to attain the performance of the conventional KSD with $K = 256$, where $K$ is the number of surviving paths in KSD. This implies a dramatic improvement in the performance--complexity tradeoff of the proposed FDL-KSD scheme.
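To make the acceleration idea concrete, the following minimal NumPy sketch seeds a naive depth-first sphere search with the residual norm of a DL-proposed candidate, so branches outside the resulting sphere are rejected early. The model interface, BPSK alphabet, and the assumption that H is already upper-triangular after QR reduction are all illustrative, not the paper's exact scheme.

```python
import numpy as np

def sphere_decode(y, H, symbols, radius):
    """Naive depth-first sphere search over a finite alphabet with pruning."""
    n = H.shape[1]
    best = [radius, None]            # current squared radius and best point
    x = np.zeros(n)
    def search(level, dist):
        if dist >= best[0]:
            return                   # early rejection: branch leaves the sphere
        if level < 0:
            best[0], best[1] = dist, x.copy()
            return
        for s in symbols:            # DL-guided candidate ordering could go here
            x[level] = s
            r = y[level] - H[level, level:] @ x[level:]
            search(level - 1, dist + r * r)
    search(n - 1, 0.0)
    return best[1]

# The initial candidate would come from the trained network; here a random
# BPSK vector stands in for the DL output.
rng = np.random.default_rng(0)
H = np.triu(rng.standard_normal((4, 4)))   # assume H already QR-reduced
x_dl = rng.choice([-1.0, 1.0], size=4)     # DL-proposed candidate
y = H @ x_dl + 0.1 * rng.standard_normal(4)
r0 = np.sum((y - H @ x_dl) ** 2)           # reliable initial squared radius
x_hat = sphere_decode(y, H, symbols=(-1.0, 1.0), radius=r0 + 1e-9)
```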
In this paper, we propose a model-driven deep learning network for multiple-input multiple-output (MIMO) detection. The structure of the network is specifically designed by unfolding an iterative detection algorithm, and a small set of trainable parameters is optimized through deep learning techniques to improve detection performance. Since the number of trainable parameters equals the number of layers, the network can be trained within a very short time. Furthermore, the network can handle time-varying channels with only a single training phase. Numerical results show that the proposed approach significantly improves the performance of the underlying iterative algorithm under both Rayleigh and spatially correlated MIMO channels.
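A generic deep-unfolding sketch is shown below, assuming the underlying iteration is a plain gradient step on the least-squares objective with a soft symbol projection; the paper unfolds its own iterative detector, but the pattern of one trainable scalar per layer is the same.

```python
import torch
import torch.nn as nn

class UnfoldedDetector(nn.Module):
    """Iterative detector unfolded into layers, one trainable scalar each."""
    def __init__(self, num_layers=10):
        super().__init__()
        # number of trainable parameters equals the number of layers
        self.step = nn.Parameter(0.1 * torch.ones(num_layers))

    def forward(self, y, H):
        x = torch.zeros(H.shape[-1])
        for t in range(len(self.step)):
            grad = H.T @ (H @ x - y)                 # gradient of 0.5*||y - Hx||^2
            x = torch.tanh(x - self.step[t] * grad)  # soft projection onto [-1, 1]
        return x

# Usage: detector = UnfoldedDetector(); x_hat = detector(y, H) with y, H
# torch tensors; training optimizes only the per-layer step sizes.
```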
Channel estimation is very challenging when the receiver is equipped with a limited number of radio-frequency (RF) chains in beamspace millimeter-wave (mmWave) massive multiple-input multiple-output systems. To solve this problem, we exploit a learned denoising-based approximate message passing (LDAMP) network. This neural network can learn the channel structure and estimate the channel from a large amount of training data. Furthermore, we provide an analytical framework for the asymptotic performance of the channel estimator. Based on our analysis and simulation results, the LDAMP neural network significantly outperforms state-of-the-art compressed sensing-based algorithms, even when the receiver is equipped with a small number of RF chains. Therefore, deep learning is a powerful tool for channel estimation in mmWave communications.
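For context, LDAMP builds on the denoising-based AMP recursion, in which a denoiser is applied at each iteration and an Onsager correction keeps the effective noise approximately Gaussian. The LaTeX sketch below gives that recursion; in LDAMP each denoiser is realized by a trained DnCNN, and the paper's exact formulation may differ in details.

```latex
% D-AMP recursion on which LDAMP is built. A is the M x N measurement
% matrix; the divergence term in z^t is the Onsager correction.
\begin{aligned}
  z^{t} &= y - A x^{t}
        + \frac{z^{t-1}}{M}\,
          \operatorname{div} D_{\hat\sigma^{t-1}}\!\bigl(x^{t-1} + A^{H} z^{t-1}\bigr),\\
  x^{t+1} &= D_{\hat\sigma^{t}}\!\bigl(x^{t} + A^{H} z^{t}\bigr),
  \qquad
  \hat\sigma^{t} = \frac{\lVert z^{t} \rVert_2}{\sqrt{M}}.
\end{aligned}
```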