
Machine Learning-based Signal Detection for PMH Signals in Load-modulated MIMO System

Added by Jinle Zhu
Publication date: 2019
Language: English





Phase Modulation on the Hypersphere (PMH) is a power-efficient modulation scheme for load-modulated multiple-input multiple-output (MIMO) transmitters with central power amplifiers (CPA). However, precise channel state information (CSI) is difficult to obtain, and the traditional optimal maximum likelihood (ML) detection scheme incurs a complexity that grows exponentially with the number of antennas and the number of bits carried per antenna in PMH modulation. To detect PMH signals without prior CSI, we first propose a signal detection scheme, termed the hypersphere clustering scheme based on the expectation maximization (EM) algorithm with maximum likelihood detection (HEM-ML). By leveraging machine learning, the proposed detection scheme can accurately extract channel information from only a few received symbols at little resource cost, and achieves detection performance comparable to that of the optimal ML detector. To further reduce the computational complexity of the ML detection in HEM-ML, we propose a second signal detection scheme, termed the hypersphere clustering scheme based on the EM algorithm with KD-tree detection (HEM-KD). The CSI obtained from the EM algorithm is used to build a spatial KD-tree receiver codebook, transforming the signal detection problem into a nearest neighbor search (NNS) problem. The detection complexity of HEM-KD is significantly reduced without any detection performance loss compared to HEM-ML. Extensive simulation results verify the effectiveness of both proposed detection schemes.
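As a rough illustration of the HEM-KD pipeline, the sketch below clusters received symbols blindly and then detects each symbol by nearest-neighbor search in a KD-tree. Plain k-means (from scikit-learn) stands in for the paper's hypersphere-aware EM step, all dimensions are toy values, and the cluster-to-codeword mapping is assumed to be resolved by a few pilot symbols; this is a sketch of the idea, not the authors' implementation.

```python
# Minimal sketch of the HEM-KD idea: cluster received symbols to estimate the
# receiver-side codebook, then detect by nearest-neighbor search in a KD-tree.
# Plain k-means stands in for the paper's hypersphere EM step; the
# cluster-to-codeword mapping is assumed resolved by a few pilot symbols.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy setup (illustrative dimensions, not the paper's): M = 16 codewords,
# received symbols in R^4 (real/imag parts of a 2-antenna receiver).
M, dim, n_sym = 16, 4, 2000
true_centroids = rng.standard_normal((M, dim))           # unknown at the receiver
labels = rng.integers(0, M, size=n_sym)                  # transmitted codeword indices
rx = true_centroids[labels] + 0.05 * rng.standard_normal((n_sym, dim))

# Blind "channel learning": estimate the M receive-side centroids by clustering.
km = KMeans(n_clusters=M, n_init=10, random_state=0).fit(rx)
codebook = km.cluster_centers_

# Detection: a KD-tree turns per-symbol detection into an O(log M) NNS query
# instead of an exhaustive distance scan over all M codewords.
tree = cKDTree(codebook)
_, detected = tree.query(rx)                             # nearest codebook entry per symbol
print("symbols detected:", detected.shape[0])
```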



Related research

In this paper, we investigate model-driven deep learning (DL) for MIMO detection. In particular, the MIMO detector is designed by unfolding an iterative algorithm and adding a few trainable parameters. Since the number of trainable parameters is much smaller than in a data-driven DL-based signal detector, the model-driven DL-based MIMO detector can be trained rapidly with a much smaller data set. The proposed MIMO detector can be easily extended to soft-input soft-output detection. Furthermore, we investigate joint MIMO channel estimation and signal detection (JCESD), where the detector takes the channel estimation error and channel statistics into consideration, while the channel estimate is refined using the detected data and accounts for the detection error. Numerical results show that the model-driven DL-based MIMO detector significantly improves the performance of the corresponding traditional iterative detector, outperforms other DL-based MIMO detectors, and exhibits superior robustness to various mismatches.
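To make the unfolding idea concrete, here is a minimal, hedged sketch (not the paper's detector): each "layer" applies one gradient step on the least-squares cost with a single trainable step size, so the whole network has only as many parameters as layers. The BPSK projection, sizes, and class name are illustrative assumptions.

```python
# Hedged sketch of a deep-unfolded MIMO detector: each layer is one iteration
# of projected gradient descent on ||y - Hx||^2, with only a scalar step size
# learned per layer. Names and sizes are illustrative, not from the paper.
import torch
import torch.nn as nn

class UnfoldedDetector(nn.Module):
    def __init__(self, n_layers: int = 10):
        super().__init__()
        # One trainable step size per unfolded iteration: very few parameters,
        # so training needs far less data than a generic data-driven network.
        self.steps = nn.Parameter(0.1 * torch.ones(n_layers))

    def forward(self, y: torch.Tensor, H: torch.Tensor) -> torch.Tensor:
        x = torch.zeros(H.shape[-1])
        for step in self.steps:
            grad = H.T @ (H @ x - y)         # gradient of the least-squares cost
            x = torch.tanh(x - step * grad)  # soft projection toward {-1, +1} (BPSK)
        return x

# Toy usage with a random real-valued 8x8 channel and BPSK symbols.
torch.manual_seed(0)
H = torch.randn(8, 8)
x_true = torch.sign(torch.randn(8))
y = H @ x_true + 0.01 * torch.randn(8)
x_hat = UnfoldedDetector()(y, H)
# Fraction of correctly detected BPSK symbols (untrained step sizes).
print((torch.sign(x_hat) == x_true).float().mean().item())
```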
Channel estimation and signal detection are essential steps to ensure the quality of end-to-end communication in orthogonal frequency-division multiplexing (OFDM) systems. In this paper, we develop DDLSD, a Data-driven Deep Learning approach for Signal Detection in OFDM systems. First, the OFDM system model is established. Then, a long short-term memory (LSTM) network is introduced into the OFDM system model. Wireless channel data are generated through simulation, and the preprocessed time-series features are fed into the LSTM for offline training. Finally, the trained model is used to recover the transmitted signal online. Unlike existing OFDM receivers, the explicit estimate of the channel state information (CSI) is replaced by an implicit one, and the transmitted symbols are recovered directly. Simulation results show that the DDLSD scheme outperforms existing traditional methods in terms of channel estimation and signal detection performance.
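A bare-bones version of this pipeline could look like the sketch below; the layer sizes, input format, and per-step bit output are our illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of the DDLSD idea: an LSTM maps received OFDM samples straight
# to transmitted bits, with no explicit CSI estimate. Layer sizes and the bit
# output format are illustrative assumptions.
import torch
import torch.nn as nn

class LSTMDetector(nn.Module):
    def __init__(self, in_dim: int = 2, hidden: int = 64, bits_per_symbol: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, bits_per_symbol)   # per-step bit logits

    def forward(self, rx: torch.Tensor) -> torch.Tensor:
        # rx: (batch, seq_len, 2) -- real/imag parts of received samples.
        h, _ = self.lstm(rx)
        return self.head(h)                              # (batch, seq_len, bits)

# Offline training would pair simulated channel realizations with known bits
# (e.g., BCEWithLogitsLoss); online, the frozen model recovers bits directly.
model = LSTMDetector()
logits = model(torch.randn(4, 128, 2))
print(logits.shape)  # torch.Size([4, 128, 2])
```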
Fei Wen, Lei Chu, Peilin Liu (2018)
In the past decade, sparse and low-rank recovery have drawn much attention in many areas such as signal/image processing, statistics, bioinformatics, and machine learning. To induce sparsity and/or low-rankness, the $\ell_1$ norm and the nuclear norm are among the most popular regularization penalties due to their convexity. While the $\ell_1$ and nuclear norms are convenient, as the related convex optimization problems are usually tractable, it has been shown in many applications that a nonconvex penalty can yield significantly better performance. Recently, nonconvex regularization-based sparse and low-rank recovery has attracted considerable interest, and it is in fact a main driver of the recent progress in nonconvex and nonsmooth optimization. This paper gives an overview of this topic across signal processing, statistics, and machine learning, including compressive sensing (CS), sparse regression and variable selection, sparse signal separation, sparse principal component analysis (PCA), estimation of large covariance and inverse covariance matrices, matrix completion, and robust PCA. We present recent developments of nonconvex regularization-based sparse and low-rank recovery in these fields, addressing the issues of penalty selection, applications, and the convergence of nonconvex algorithms. Code is available at https://github.com/FWen/ncreg.git.
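As a small worked example of why a nonconvex penalty can outperform the $\ell_1$ norm, compare the proximal operators of the $\ell_1$ penalty (soft thresholding) and of the nonconvex MCP penalty (firm thresholding): the former shrinks every coefficient and thus biases large ones, while the latter passes large coefficients through untouched. Parameter values below are illustrative.

```python
# Soft thresholding (l1 prox) vs. the MCP prox ("firm thresholding"):
# the l1 prox shrinks everything; MCP leaves large coefficients unbiased.
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of lam*|x| (the l1 penalty): uniform shrinkage.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def mcp_threshold(x, lam, gamma=2.0):
    # Proximal operator of the MCP penalty (requires gamma > 1): shrink small
    # values, pass large ones through, removing the l1 estimation bias.
    ax = np.abs(x)
    return np.where(ax <= lam, 0.0,
           np.where(ax <= gamma * lam,
                    np.sign(x) * gamma * (ax - lam) / (gamma - 1.0), x))

x = np.array([-3.0, -0.5, 0.2, 1.2, 4.0])
print(soft_threshold(x, 1.0))  # [-2. -0.  0.  0.2 3.]  -- large entries biased
print(mcp_threshold(x, 1.0))   # [-3. -0.  0.  0.4 4.]  -- large entries kept
```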
Massive multiple-input multiple-output (MIMO) is one of the key techniques for achieving better spectrum and energy efficiency in 5G systems. The channel state information (CSI) needs to be fed back from the user equipment to the base station in frequency-division duplexing (FDD) mode. However, the overhead of direct feedback is unacceptable due to the large antenna arrays in massive MIMO systems. Recently, deep learning has been widely applied to the compressed CSI feedback task and proved effective. In this paper, a novel network named aggregated channel reconstruction network (ACRNet) is designed to boost feedback performance with network aggregation and parametric rectified linear unit (PReLU) activation. The practical deployment of the feedback network in the communication system is also considered. Specifically, an elastic feedback scheme is proposed to flexibly adapt the network to different resource limitations. Besides, the network binarization technique is combined with feature quantization for lightweight and practical deployment. Experiments show that the proposed ACRNet outperforms previous state-of-the-art networks, providing a neat feedback solution with high performance, low cost, and notable flexibility.
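The compressed-feedback setting itself is easy to sketch: the toy autoencoder below (with PReLU activations, following the paper's stated choice) compresses the CSI at the user and reconstructs it at the base station. The layer sizes and class name are our assumptions, and none of ACRNet's aggregation, elastic feedback, or binarization machinery is modeled.

```python
# Rough sketch of the compressed CSI feedback setting (not ACRNet itself): an
# encoder at the UE compresses the CSI matrix to a short feedback vector and a
# decoder at the BS reconstructs it. All layer sizes are illustrative.
import torch
import torch.nn as nn

class CSIFeedbackAE(nn.Module):
    def __init__(self, csi_dim: int = 2 * 32 * 32, feedback_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(csi_dim, feedback_dim), nn.PReLU())
        self.decoder = nn.Linear(feedback_dim, csi_dim)

    def forward(self, csi: torch.Tensor) -> torch.Tensor:
        code = self.encoder(csi)      # low-overhead feedback sent UE -> BS
        return self.decoder(code)     # CSI reconstructed at the BS

model = CSIFeedbackAE()
csi = torch.randn(8, 2 * 32 * 32)     # 8 samples, real/imag of a 32x32 channel
print(model(csi).shape)               # torch.Size([8, 2048])
```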
In a time-varying massive multiple-input multiple-output (MIMO) system, acquiring the downlink channel state information at the base station (BS) is very challenging due to the prohibitively high overhead of downlink training and uplink feedback. In this paper, we consider a hybrid precoding structure at the BS and examine antenna-time domain channel extrapolation. We design a latent ordinary differential equation (ODE)-based network under the variational auto-encoder (VAE) framework to learn the mapping from the partial uplink channels to the full downlink ones at the BS side. Specifically, a gated recurrent unit is adopted for the encoder and a fully-connected neural network is used for the decoder. End-to-end learning is used to optimize the network parameters. Simulation results show that the designed network can efficiently infer the full downlink channels from the partial uplink ones, significantly reducing the channel training overhead.
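Stripped of the latent-ODE/VAE machinery, the encoder/decoder split described above might be sketched as follows, with a GRU summarizing the partial uplink sequence and a fully-connected decoder emitting the full downlink channel; all dimensions and names are illustrative assumptions.

```python
# Hedged sketch of the extrapolation pipeline's encoder/decoder split: a GRU
# summarizes the partial uplink channel sequence, a fully-connected decoder
# emits the full downlink channel. The paper's latent-ODE/VAE parts are omitted.
import torch
import torch.nn as nn

class ChannelExtrapolator(nn.Module):
    def __init__(self, ul_dim: int = 16, hidden: int = 64, dl_dim: int = 64):
        super().__init__()
        self.encoder = nn.GRU(ul_dim, hidden, batch_first=True)  # uplink history -> state
        self.decoder = nn.Sequential(                            # state -> full downlink CSI
            nn.Linear(hidden, 128), nn.ReLU(), nn.Linear(128, dl_dim))

    def forward(self, ul_seq: torch.Tensor) -> torch.Tensor:
        _, h = self.encoder(ul_seq)   # h: (1, batch, hidden), final hidden state
        return self.decoder(h[-1])    # (batch, dl_dim)

model = ChannelExtrapolator()
ul = torch.randn(4, 10, 16)  # 4 users, 10 uplink snapshots of 16 features each
print(model(ul).shape)       # torch.Size([4, 64])
```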
