In this paper, we propose a model-driven deep learning network for multiple-input multiple-output (MIMO) detection. The structure of the network is specially designed by unfolding an iterative detection algorithm, and a small set of trainable parameters is optimized through deep learning techniques to improve detection performance. Since the number of trainable variables equals the number of layers, the network can be trained within a very short time. Furthermore, the network can handle time-varying channels with only a single training phase. Numerical results show that the proposed approach significantly improves the performance of the underlying iterative algorithm under Rayleigh and correlated MIMO channels.
In this paper, we investigate model-driven deep learning (DL) for MIMO detection. In particular, the MIMO detector is specially designed by unfolding an iterative algorithm and adding some trainable parameters. Since the number of trainable parameters is much smaller than in purely data-driven DL detectors, the model-driven detector can be trained rapidly with a much smaller data set.
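To make the unfolding idea from the two abstracts above concrete, here is a minimal sketch of our own (not the exact network from either paper): a gradient-descent detector for the model y = Hx + n, where each iteration x ← x + γ_t Hᵀ(y − Hx) becomes one network layer and the per-layer step size γ_t is the only trainable parameter, so the parameter count equals the layer count.

```python
import numpy as np

def unfolded_gd_detect(y, H, gammas):
    """Apply len(gammas) unfolded gradient-descent layers; each layer
    carries a single trainable step size, so parameters == layers."""
    x = np.zeros(H.shape[1])
    for gamma in gammas:                   # one loop pass == one layer
        x = x + gamma * H.T @ (y - H @ x)  # gradient step on ||y - Hx||^2
    return np.sign(x)                      # slice to the nearest BPSK symbol

# Toy 4x4 real-valued system with BPSK symbols.
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 4)) / 2
x_true = rng.choice([-1.0, 1.0], size=4)
y = H @ x_true + 0.01 * rng.normal(size=4)

gammas = [0.1] * 10   # fixed placeholders here; learned via training in practice
print(unfolded_gd_detect(y, H, gammas), x_true)
```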
We propose a deep-learning approach for the joint MIMO detection and channel decoding problem. Conventional MIMO receivers adopt a model-based approach, performing MIMO detection and channel decoding in a linear or iterative manner. However, due to the complexity …
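For reference, the linear model-based detection this abstract contrasts against is typically a minimum mean-square error (MMSE) filter; a minimal sketch, assuming a perfectly known channel H and noise variance σ²:

```python
import numpy as np

def mmse_detect(y, H, sigma2):
    """Linear MMSE estimate: x_hat = (H^H H + sigma^2 I)^(-1) H^H y."""
    n = H.shape[1]
    A = H.conj().T @ H + sigma2 * np.eye(n)
    return np.linalg.solve(A, H.conj().T @ y)

# Toy usage on a 4x4 complex channel with BPSK symbols.
rng = np.random.default_rng(0)
H = (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))) / np.sqrt(2)
x = np.sign(rng.normal(size=4)) + 0j
y = H @ x + 0.1 * rng.normal(size=4)
print(np.sign(mmse_detect(y, H, sigma2=0.01).real))
```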
In this paper, an efficient massive multiple-input multiple-output (MIMO) detector is proposed by employing a deep neural network (DNN). Specifically, we first unfold an existing iterative detection algorithm into the DNN structure, such that the detector …
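A hedged sketch of how such an unfolded detector could be trained end to end on synthetic channel draws (PyTorch, the `UnfoldedDetector` name, and the training setup below are our illustrative choices, not the paper's):

```python
import torch
import torch.nn as nn

class UnfoldedDetector(nn.Module):
    """T gradient-descent layers; the only trainable parameters are the
    T per-layer step sizes, so training is fast and data-light."""
    def __init__(self, num_layers=10):
        super().__init__()
        self.gammas = nn.Parameter(0.1 * torch.ones(num_layers))

    def forward(self, y, H):
        x = torch.zeros(H.shape[-1])
        for gamma in self.gammas:
            x = x + gamma * H.T @ (y - H @ x)
        return x

torch.manual_seed(0)
model = UnfoldedDetector()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(200):                            # tiny synthetic training loop
    H = torch.randn(4, 4) / 2                      # fresh channel draw per step
    x = torch.randint(0, 2, (4,)).float() * 2 - 1  # random BPSK symbols
    y = H @ x + 0.01 * torch.randn(4)
    loss = ((model(y, H) - x) ** 2).mean()         # MSE against sent symbols
    opt.zero_grad(); loss.backward(); opt.step()

print(model.gammas.detach())                       # the learned step sizes
```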
Massive multiuser multiple-input multiple-output (MU-MIMO) has become a mainstream technology in fifth-generation wireless systems. To reduce the high hardware cost and power consumption of massive MU-MIMO, low-resolution digital-to-analog converters (DACs) …
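To make the low-resolution-DAC setting concrete, here is an illustrative sketch (our own, not the paper's scheme) of zero-forcing downlink precoding followed by 1-bit DACs, which keep only the sign of each antenna signal's real and imaginary parts:

```python
import numpy as np

rng = np.random.default_rng(1)
B, U = 16, 4                                  # base-station antennas, users
H = (rng.normal(size=(U, B)) + 1j * rng.normal(size=(U, B))) / np.sqrt(2)
s = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=U) / np.sqrt(2)  # QPSK

P = H.conj().T @ np.linalg.inv(H @ H.conj().T)   # zero-forcing precoder
x = P @ s                                        # infinite-resolution signal

# 1-bit DACs: only the sign of each real/imaginary part survives.
x_q = (np.sign(x.real) + 1j * np.sign(x.imag)) / np.sqrt(2 * B)

print(np.round(H @ x_q, 2))   # per-user received signal, now distorted
```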
Lattice reduction (LR) is a preprocessing technique for multiple-input multiple-output (MIMO) symbol detection that achieves better bit-error-rate (BER) performance. In this paper, we propose a customized homogeneous multiprocessor for LR. The processor …
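For context, the LR step that such a processor accelerates is usually a variant of the Lenstra–Lenstra–Lovász (LLL) algorithm. Below is a compact, deliberately unoptimized version that re-runs Gram–Schmidt after every basis change; practical MIMO detectors would also track the unimodular transform T with H_red = H·T, omitted here for brevity.

```python
import numpy as np

def gram_schmidt(B):
    """Return the orthogonalized columns Q and the GSO coefficients mu."""
    n = B.shape[1]
    Q = B.astype(float).copy()
    mu = np.zeros((n, n))
    for i in range(n):
        for j in range(i):
            mu[i, j] = B[:, i] @ Q[:, j] / (Q[:, j] @ Q[:, j])
            Q[:, i] -= mu[i, j] * Q[:, j]
    return Q, mu

def lll_reduce(B, delta=0.75):
    """Textbook LLL on the columns of B; recomputing the GSO after every
    change is slow but fine for the small matrices used in MIMO LR."""
    B = B.astype(float).copy()
    n = B.shape[1]
    Q, mu = gram_schmidt(B)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):        # size-reduce b_k against b_j
            q = round(mu[k, j])
            if q != 0:
                B[:, k] -= q * B[:, j]
                Q, mu = gram_schmidt(B)
        # Lovasz condition: advance, or swap the pair and step back.
        if Q[:, k] @ Q[:, k] >= (delta - mu[k, k - 1] ** 2) * (Q[:, k - 1] @ Q[:, k - 1]):
            k += 1
        else:
            B[:, [k - 1, k]] = B[:, [k, k - 1]]
            Q, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return B

# Example: reduce a small integer basis (columns are basis vectors).
print(lll_reduce(np.array([[1, -1, 3], [1, 0, 5], [1, 2, 6]])))
```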