
A Model-Driven Deep Learning Network for MIMO Detection

Added by Hengtao He
Publication date: 2018
Research language: English





In this paper, we propose a model-driven deep learning network for multiple-input multiple-output (MIMO) detection. The structure of the network is specially designed by unfolding an iterative algorithm. Some trainable parameters are optimized through deep learning techniques to improve the detection performance. Since the number of trainable variables of the network equals the number of layers, the network can be trained easily and within a very short time. Furthermore, the network can handle time-varying channels with only a single training. Numerical results show that the proposed approach significantly improves the performance of the iterative algorithm under Rayleigh and correlated MIMO channels.
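
As a rough illustration of the unfolding idea described above, the sketch below unrolls a plain gradient-style detector with one trainable step size per layer. The function name, the real-valued BPSK signal model, and the clipping denoiser are assumptions made here for illustration; this is not the paper's exact network, and the per-layer step sizes would be learned offline with a deep learning framework.

import numpy as np

def unfolded_detector(y, H, step_sizes):
    """Forward pass of a deep-unfolded iterative MIMO detector (sketch).

    y          : received vector of shape (Nr,)
    H          : channel matrix of shape (Nr, Nt)
    step_sizes : one trainable scalar per unfolded layer
    """
    x_hat = np.zeros(H.shape[1])
    for gamma in step_sizes:
        # One unrolled iteration of a gradient-style update; the trainable
        # scalar replaces the fixed step size of the classical algorithm.
        residual = y - H @ x_hat
        x_hat = x_hat + gamma * (H.T @ residual)
        # Crude denoiser toward the BPSK alphabet (illustrative only).
        x_hat = np.clip(x_hat, -1.0, 1.0)
    return x_hat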



Related Research

In this paper, we investigate model-driven deep learning (DL) for MIMO detection. In particular, the MIMO detector is specially designed by unfolding an iterative algorithm and adding trainable parameters. Since the number of trainable parameters is much smaller than in a data-driven DL-based signal detector, the model-driven DL-based MIMO detector can be trained rapidly with a much smaller data set. The proposed MIMO detector can easily be extended to soft-input soft-output detection. Furthermore, we investigate joint MIMO channel estimation and signal detection (JCESD), where the detector takes channel estimation error and channel statistics into consideration, while the channel estimation is refined by the detected data and accounts for the detection error. Based on numerical results, the model-driven DL-based MIMO detector significantly improves the performance of the corresponding traditional iterative detector, outperforms other DL-based MIMO detectors, and exhibits superior robustness to various mismatches.
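
The JCESD loop described above can be pictured as alternating between data detection and channel-estimate refinement. The sketch below uses a simple least-squares refinement purely for illustration; the function name, shapes, and refinement rule are assumptions, and the paper's scheme additionally accounts for channel statistics and estimation/detection error variances.

import numpy as np

def jcesd(y_pilot, x_pilot, y_data, detect, num_rounds=3):
    """Sketch of joint channel estimation and signal detection (JCESD).

    y_pilot : (Nr, P) received pilot block
    x_pilot : (Nt, P) known pilot symbols
    y_data  : (Nr, D) received data block
    detect  : any MIMO detector, detect(Y, H) -> (Nt, D) symbol estimates
    """
    # Initial least-squares channel estimate from the pilots alone.
    H_hat = y_pilot @ np.linalg.pinv(x_pilot)
    for _ in range(num_rounds):
        # Detect the data block with the current channel estimate.
        x_hat = detect(y_data, H_hat)
        # Refine the channel estimate using pilots plus detected data.
        X = np.concatenate([x_pilot, x_hat], axis=1)
        Y = np.concatenate([y_pilot, y_data], axis=1)
        H_hat = Y @ np.linalg.pinv(X)
    return H_hat, x_hat
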
We propose a deep-learning approach for the joint MIMO detection and channel decoding problem. Conventional MIMO receivers adopt a model-based approach for MIMO detection and channel decoding in linear or iterative manners. However, due to the complex MIMO signal model, the optimal solution to the joint MIMO detection and channel decoding problem (i.e., the maximum-likelihood decoding of the transmitted codewords from the received MIMO signals) is computationally infeasible. As a practical measure, current model-based MIMO receivers all use suboptimal MIMO decoding methods with affordable computational complexity. This work applies the latest advances in deep learning to the design of MIMO receivers. In particular, we leverage deep neural networks (DNNs) with supervised training to solve the joint MIMO detection and channel decoding problem. We show that a DNN can be trained to give much better decoding performance than conventional MIMO receivers do. Our simulations show that a DNN implementation consisting of seven hidden layers can outperform conventional model-based linear or iterative receivers. This performance improvement points to a new direction for future MIMO receiver design.
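
A data-driven receiver of the kind described above could be assembled as a fully connected network. The sketch below builds a seven-hidden-layer multilayer perceptron in PyTorch; the hidden width, input encoding (real and imaginary parts stacked), and output encoding (per-bit probabilities) are assumptions, not the paper's exact architecture.

import torch.nn as nn

def build_dnn_receiver(num_rx_samples, num_info_bits, hidden=256):
    """Illustrative fully connected receiver with seven hidden layers,
    mapping received MIMO samples directly to per-bit probabilities."""
    layers = [nn.Linear(2 * num_rx_samples, hidden), nn.ReLU()]
    for _ in range(6):  # six more hidden layers, seven in total
        layers += [nn.Linear(hidden, hidden), nn.ReLU()]
    layers += [nn.Linear(hidden, num_info_bits), nn.Sigmoid()]
    return nn.Sequential(*layers)

# Supervised training would minimize binary cross-entropy (nn.BCELoss)
# between the network output and the transmitted information bits.
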
In this paper, an efficient massive multiple-input multiple-output (MIMO) detector is proposed by employing a deep neural network (DNN). Specifically, we first unfold an existing iterative detection algorithm into a DNN structure, so that the detection task can be implemented by a deep learning (DL) approach. We then introduce two auxiliary parameters at each layer to better cancel multiuser interference (MUI). The first parameter generates the residual error vector, while the second adjusts the relationship among previous layers. We further design the training procedure to optimize the auxiliary parameters with pre-processed inputs. The detector derived in this way falls into the category of model-driven DL. Simulation results show that the proposed MIMO detector achieves better detection performance than existing detectors for massive MIMO systems.
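
To picture the role of the two auxiliary parameters per layer, the sketch below adds one scalar that scales the residual-error correction and one that blends the update with the previous layer's output. The update rule and names are illustrative assumptions, not the paper's exact layer structure.

import numpy as np

def unfolded_detector_aux(y, H, alphas, betas):
    """Unfolded iterative detector with two trainable scalars per layer
    (illustrative): alphas[t] scales the residual-error correction and
    betas[t] controls the combination with the previous layer's output."""
    x = np.zeros(H.shape[1])
    for alpha, beta in zip(alphas, betas):
        residual = y - H @ x                   # residual error vector
        x_new = x + alpha * (H.T @ residual)   # interference-cancelling update
        x = beta * x_new + (1.0 - beta) * x    # blend with previous layer
    return x
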
Massive multiuser multiple-input multiple-output (MU-MIMO) has become a mainstream technology in fifth-generation wireless systems. To reduce the high hardware cost and power consumption of massive MU-MIMO, low-resolution digital-to-analog converters (DACs) are used for each antenna and radio frequency (RF) chain in downlink transmission, which brings challenges for precoding design. To circumvent these obstacles, we develop a model-driven deep learning (DL) network for massive MU-MIMO with finite-alphabet precoding in this article. The architecture of the network is specially designed by unfolding an iterative algorithm. Compared with traditional state-of-the-art techniques, the proposed DL-based precoder shows significant advantages in performance, complexity, and robustness to channel estimation error under Rayleigh fading channels.
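
For context, a conventional baseline that an unfolded finite-alphabet precoder would be compared against is linear precoding followed by coarse quantization to the DAC alphabet. The sketch below shows such a quantized zero-forcing precoder; it is an assumed baseline for illustration, not the article's network.

import numpy as np

def quantized_zf_precoder(H, s, dac_levels):
    """Quantized zero-forcing precoding sketch: H is the K x M downlink
    channel, s the K user symbols, dac_levels the per-antenna DAC alphabet."""
    # Zero-forcing precoding matrix (M x K).
    P = H.conj().T @ np.linalg.inv(H @ H.conj().T)
    x = P @ s
    # Map each antenna's real and imaginary parts to the nearest DAC level.
    levels = np.asarray(dac_levels, dtype=float)
    q = lambda v: levels[np.argmin(np.abs(levels[None, :] - v[:, None]), axis=1)]
    return q(x.real) + 1j * q(x.imag)
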
Lattice reduction (LR) is a preprocessing technique for multiple-input multiple-output (MIMO) symbol detection that achieves better bit-error-rate (BER) performance. In this paper, we propose a customized homogeneous multiprocessor for LR. The processor cores are based on a transport-triggered architecture (TTA). We propose modifications to the popular LR algorithm, Lenstra-Lenstra-Lovász (LLL), for high throughput. The TTA cores are programmed in a high-level language. Each TTA core consists of several special function units to accelerate the program code. The multiprocessor takes 187 cycles to reduce a single matrix for LR. The architecture is synthesized in a 90 nm technology and occupies 405 kgates at 210 MHz.
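
For reference, the sketch below is a textbook floating-point LLL reduction of the columns of a basis matrix. The hardware design described above implements a throughput-optimized variant of this procedure; the code here is only a functional reference, not the TTA implementation.

import numpy as np

def lll_reduce(B, delta=0.75):
    """Basic LLL lattice reduction on the columns of B (floating-point sketch)."""
    B = B.astype(float).copy()
    n = B.shape[1]

    def gram_schmidt(B):
        # Gram-Schmidt orthogonalization with the mu coefficients.
        Q = np.zeros_like(B)
        mu = np.zeros((n, n))
        for i in range(n):
            Q[:, i] = B[:, i]
            for j in range(i):
                mu[i, j] = B[:, i] @ Q[:, j] / (Q[:, j] @ Q[:, j])
                Q[:, i] -= mu[i, j] * Q[:, j]
        return Q, mu

    k = 1
    while k < n:
        Q, mu = gram_schmidt(B)
        # Size-reduce column k against the previous columns.
        for j in range(k - 1, -1, -1):
            if abs(mu[k, j]) > 0.5:
                B[:, k] -= np.rint(mu[k, j]) * B[:, j]
                Q, mu = gram_schmidt(B)
        # Lovasz condition decides whether to swap columns k-1 and k.
        if Q[:, k] @ Q[:, k] >= (delta - mu[k, k - 1] ** 2) * (Q[:, k - 1] @ Q[:, k - 1]):
            k += 1
        else:
            B[:, [k - 1, k]] = B[:, [k, k - 1]]
            k = max(k - 1, 1)
    return B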
