
Deep Learning-based Channel Estimation for Beamspace mmWave Massive MIMO Systems

Posted by: Hengtao He
Publication date: 2018
Research field: Information Engineering
Language: English





Channel estimation is very challenging when the receiver is equipped with a limited number of radio-frequency (RF) chains in beamspace millimeter-wave (mmWave) massive multiple-input multiple-output (MIMO) systems. To solve this problem, we exploit a learned denoising-based approximate message passing (LDAMP) network. This neural network can learn the channel structure and estimate the channel from a large amount of training data. Furthermore, we provide an analytical framework for the asymptotic performance of the channel estimator. Based on our analysis and simulation results, the LDAMP neural network significantly outperforms state-of-the-art compressed sensing-based algorithms even when the receiver is equipped with a small number of RF chains. Therefore, deep learning is a powerful tool for channel estimation in mmWave communications.
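
At its core, LDAMP unrolls the denoising-based AMP (D-AMP) iteration and replaces the hand-crafted denoiser with a trained CNN (a DnCNN in the paper). Below is a minimal real-valued NumPy sketch of the underlying D-AMP loop with a plug-in denoiser and a Monte Carlo estimate of the Onsager correction; the function names and parameter choices are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def damp(y, A, denoiser, iters=10, eps=1e-3):
    """D-AMP sketch: LDAMP replaces `denoiser` with a learned CNN
    and unrolls the loop into network layers (real-valued sketch)."""
    M, N = A.shape
    x = np.zeros(N)
    z = y.copy()
    for _ in range(iters):
        r = x + A.T @ z                           # pseudo-data: signal + effective noise
        sigma = np.linalg.norm(z) / np.sqrt(M)    # effective noise level
        x_new = denoiser(r, sigma)
        # Monte Carlo estimate of the denoiser divergence (Onsager term)
        eta = np.random.randn(N)
        div = eta @ (denoiser(r + eps * eta, sigma) - x_new) / eps
        z = y - A @ x_new + (div / M) * z         # residual with Onsager correction
        x = x_new
    return x
```

Passing a soft-thresholding denoiser, e.g. `lambda r, s: np.sign(r) * np.maximum(np.abs(r) - s, 0.0)`, recovers classical AMP for sparse recovery; LDAMP instead trains the denoiser end to end on channel realizations.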




Read also

Millimeter-wave massive MIMO with a lens antenna array can considerably reduce the number of required radio-frequency (RF) chains by beam selection. However, beam selection requires the base station to acquire accurate information about the beamspace channel. This is a challenging task, as the size of the beamspace channel is large while the number of RF chains is limited. In this paper, we investigate the beamspace channel estimation problem in mmWave massive MIMO systems with a lens antenna array. Specifically, we first design an adaptive selecting network for mmWave massive MIMO systems with a lens antenna array, and based on this network, we further formulate the beamspace channel estimation problem as a sparse signal recovery problem. Then, by fully utilizing the structural characteristics of the mmWave beamspace channel, we propose a support detection (SD)-based channel estimation scheme with reliable performance and low pilot overhead. Finally, performance and complexity analyses are provided to prove that the proposed SD-based channel estimation scheme can estimate the support of the sparse beamspace channel with comparable or higher accuracy than conventional schemes. Simulation results verify that the proposed SD-based scheme outperforms conventional schemes and achieves satisfactory accuracy even in the low-SNR region, since the structural characteristics of the beamspace channel are exploited.
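
The structural idea behind support detection is that each path's power concentrates on a few adjacent beams, so locating the strongest beam per path largely reveals the support. The NumPy fragment below is a simplified illustration of that idea under assumed parameters (L path components, a window of V beams per path); it is not the paper's exact SD procedure.

```python
import numpy as np

def sd_estimate(y, Phi, L=3, V=8):
    """Simplified support-detection sketch: per path, pick the strongest
    beam in the residual's correlation, keep a window of beams around it,
    then least-squares on the accumulated support."""
    N = Phi.shape[1]
    support = set()
    r = y.copy()
    for _ in range(L):
        corr = Phi.conj().T @ r                    # beam-domain correlation
        n_star = int(np.argmax(np.abs(corr)))      # strongest beam of this path
        support.update((n_star + k) % N for k in range(-V // 2, V // 2 + 1))
        S = sorted(support)
        h_S, *_ = np.linalg.lstsq(Phi[:, S], y, rcond=None)
        r = y - Phi[:, S] @ h_S                    # remove detected components
    h = np.zeros(N, dtype=complex)
    h[S] = h_S
    return h
```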
In a time-varying massive multiple-input multiple-output (MIMO) system, the acquisition of downlink channel state information at the base station (BS) is a very challenging task due to the prohibitively high overheads associated with downlink training and uplink feedback. In this paper, we consider a hybrid precoding structure at the BS and examine antenna-time domain channel extrapolation. We design a latent ordinary differential equation (ODE)-based network under the variational auto-encoder (VAE) framework to learn the mapping function from the partial uplink channels to the full downlink ones at the BS side. Specifically, a gated recurrent unit is adopted for the encoder and a fully connected neural network is used for the decoder. End-to-end learning is utilized to optimize the network parameters. Simulation results show that the designed network can efficiently infer the full downlink channels from the partial uplink ones, which significantly reduces the channel training overhead.
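
A stripped-down PyTorch sketch of the encoder-decoder shape described above follows; it keeps the GRU encoder, the VAE reparameterization, and the fully connected decoder, but omits the latent-ODE dynamics. All layer sizes and names are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ChannelExtrapolator(nn.Module):
    """Simplified VAE-style mapping: a GRU encodes a sequence of partial
    uplink channels into a latent state; an FC decoder emits the full
    downlink channel. The latent-ODE dynamics are omitted for brevity."""
    def __init__(self, uplink_dim, downlink_dim, hidden=128, latent=32):
        super().__init__()
        self.encoder = nn.GRU(uplink_dim, hidden, batch_first=True)
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)
        self.decoder = nn.Sequential(
            nn.Linear(latent, hidden), nn.ReLU(), nn.Linear(hidden, downlink_dim)
        )

    def forward(self, uplink_seq):                 # (batch, time, uplink_dim)
        _, h = self.encoder(uplink_seq)            # h: (1, batch, hidden)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.decoder(z), mu, logvar
```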
The problem of wideband massive MIMO channel estimation is considered. Targeting low-complexity algorithms as well as small training overhead, a compressive sensing (CS) approach is pursued. Unfortunately, due to the Kronecker-type sensing (measurement) matrix corresponding to this setup, standard CS algorithms and analysis methodology do not directly apply. By recognizing that the channel possesses a special structure, termed hierarchical sparsity, we propose an efficient algorithm that explicitly takes this property into account. In addition, by extending the standard CS analysis methodology to hierarchically sparse vectors, we provide a rigorous analysis of the algorithm's performance in terms of estimation error as well as the number of pilot subcarriers required to achieve it. Small training overhead, in turn, means a higher number of supported users in a cell and potentially improved pilot decontamination. We believe that this is the first paper to draw a rigorous connection between the hierarchical framework and Kronecker measurements. Numerical results verify the advantage of employing the proposed approach in this setting instead of standard CS algorithms.
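
Hierarchical sparsity means the channel vector, viewed as blocks (e.g., delay taps per angular bin), has only s active blocks, each with at most sigma active entries. One way to exploit this inside an iterative-thresholding loop is the hierarchical projection sketched below in NumPy; the block layout and parameters are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def hi_threshold(x, num_blocks, s, sigma):
    """Hierarchical (s, sigma)-thresholding sketch: keep the sigma
    largest-magnitude entries inside each block, then keep only the
    s blocks with the largest retained energy."""
    X = x.reshape(num_blocks, -1)
    out = np.zeros_like(X)
    idx = np.argsort(np.abs(X), axis=1)[:, -sigma:]   # top-sigma per block
    rows = np.arange(num_blocks)[:, None]
    out[rows, idx] = X[rows, idx]
    energy = np.sum(np.abs(out) ** 2, axis=1)
    out[np.argsort(energy)[:max(num_blocks - s, 0)]] = 0  # drop weak blocks
    return out.reshape(-1)
```

Plugging this projection into a standard iterative hard-thresholding recursion, in place of plain top-k thresholding, yields the kind of hierarchy-aware recovery loop the abstract describes.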
Jiabao Gao, Mu Hu, Caijun Zhong, 2021
Channel estimation is one of the key issues in practical massive multiple-input multiple-output (MIMO) systems. Compared with conventional estimation algorithms, deep learning (DL) based ones have exhibited great potential in terms of performance and complexity. In this paper, an attention mechanism, exploiting the channel distribution characteristics, is proposed to improve the estimation accuracy of highly separable channels with narrow angular spread by realizing a divide-and-conquer policy. Specifically, we introduce a novel attention-aided DL channel estimation framework for conventional massive MIMO systems and devise an embedding method to effectively integrate the attention mechanism into the fully connected neural network for the hybrid analog-digital (HAD) architecture. Simulation results show that in both scenarios, the channel estimation performance is significantly improved with the aid of attention at the cost of a small complexity overhead. Furthermore, strong robustness under different system and channel parameters can be achieved by the proposed approach, which further strengthens its practical value. We also investigate the distributions of the learned attention maps to reveal the role of attention, which endows the proposed approach with a certain degree of interpretability.
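
Embedding attention into a fully connected estimator can be pictured as a learned gate that reweights hidden features before the output mapping, letting the network specialize per channel region. The PyTorch sketch below shows one such squeeze-and-excite-style gate; the architecture and dimensions are illustrative assumptions, not the paper's network.

```python
import torch
import torch.nn as nn

class AttentionAidedEstimator(nn.Module):
    """Sketch of attention-aided estimation: a sigmoid gate reweights
    hidden features (a soft divide-and-conquer over channel regions)
    before the final fully connected mapping."""
    def __init__(self, in_dim, out_dim, hidden=256):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.attention = nn.Sequential(
            nn.Linear(hidden, hidden // 4), nn.ReLU(),
            nn.Linear(hidden // 4, hidden), nn.Sigmoid()
        )
        self.head = nn.Linear(hidden, out_dim)

    def forward(self, y):
        f = self.backbone(y)
        return self.head(f * self.attention(f))   # attention-gated features
```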
We consider the problem of channel estimation for uplink multiuser massive MIMO systems, where, in order to significantly reduce the hardware cost and power consumption, one-bit analog-to-digital converters (ADCs) are used at the base station (BS) to quantize the received signal. Channel estimation for one-bit massive MIMO systems is challenging due to the severe distortion caused by the coarse quantization. It was shown in previous studies that an extremely long training sequence is required to attain acceptable performance. In this paper, we study the problem of optimal one-bit quantization design for channel estimation in one-bit massive MIMO systems. Our analysis reveals that, if the quantization thresholds are optimally devised, using one-bit ADCs can achieve an estimation error close to (with an increase by a factor of $\pi/2$) that of an ideal estimator which has access to the unquantized data. The optimal quantization thresholds, however, depend on the unknown channel parameters. To cope with this difficulty, we propose an adaptive quantization (AQ) approach in which the thresholds are adaptively adjusted such that they converge to the optimal thresholds, and a random quantization (RQ) scheme which randomly generates a set of nonidentical thresholds based on some statistical prior knowledge of the channel. Simulation results show that our proposed AQ and RQ schemes, owing to their wisely devised thresholds, present a significant performance improvement over the conventional fixed quantization scheme that uses a fixed (typically zero) threshold, while also achieving a substantial reduction in training overhead for channel estimation. In particular, even with a moderate number of pilot symbols (about 5 times the number of users), the AQ scheme can provide an achievable rate close to that of the perfect channel state information (CSI) case.
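
The intuition behind adaptive quantization is that a one-bit comparator is most informative when its threshold sits near the value of the underlying sample. The NumPy sketch below illustrates this with a hypothetical Robbins-Monro-style update that drives the threshold toward the unknown sample; it is not the paper's AQ algorithm, and the names and step sizes are assumptions.

```python
import numpy as np

def adaptive_threshold(sample_fn, n_pilots=50, c=1.0):
    """Hypothetical adaptive-quantization sketch: each one-bit output
    nudges the comparator threshold toward the unknown sample value,
    where one-bit measurements carry the most information.
    sample_fn(tau) returns sign(x + noise - tau) for unknown x."""
    tau = 0.0
    for k in range(1, n_pilots + 1):
        tau += (c / k) * sample_fn(tau)   # decaying step toward the sample
    return tau

# usage: locate x = 0.7 while observing only one-bit comparator outputs
rng = np.random.default_rng(0)
est = adaptive_threshold(lambda t: np.sign(0.7 + 0.1 * rng.standard_normal() - t))
```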