
Massive MIMO Channel Estimation with an Untrained Deep Neural Network

Added by Eren Balevi
Publication date: 2019
Language: English





This paper proposes a deep learning-based channel estimation method for multi-cell interference-limited massive MIMO systems, in which base stations equipped with a large number of antennas serve multiple single-antenna users. The proposed estimator employs a specially designed deep neural network (DNN) to first denoise the received signal, followed by conventional least-squares (LS) estimation. We analytically prove that our LS-type deep channel estimator can approach minimum mean square error (MMSE) estimator performance for high-dimensional signals, while avoiding the MMSE estimator's requirement for complex channel covariance knowledge.
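As a rough illustration of the denoise-then-LS pipeline described above, the NumPy sketch below applies a placeholder denoiser (standing in for the paper's untrained DNN, which is not reproduced here) and then a standard least-squares estimate from known pilots. The dimensions, pilot design, and noise level are illustrative assumptions, not values from the paper.

import numpy as np

def ls_estimate(Y, X):
    """Least-squares channel estimate from received pilots Y = H X + N.

    Y: (num_antennas, pilot_len) received pilot signal
    X: (num_users, pilot_len) known pilot matrix
    Returns H_hat with shape (num_antennas, num_users).
    """
    # H_hat = Y X^H (X X^H)^{-1}
    return Y @ X.conj().T @ np.linalg.inv(X @ X.conj().T)

def denoise(Y):
    """Placeholder for the paper's untrained-DNN denoiser (not implemented here)."""
    return Y  # the actual method would fit an untrained DNN to Y and return its output

# Toy setup (illustrative numbers only): 64-antenna BS, 4 users, 8-symbol pilots
rng = np.random.default_rng(0)
M, K, P = 64, 4, 8
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
X = np.fft.fft(np.eye(P))[:K, :]        # orthogonal pilot sequences (rows of a DFT matrix)
Y = H @ X + 0.1 * (rng.standard_normal((M, P)) + 1j * rng.standard_normal((M, P)))

H_hat = ls_estimate(denoise(Y), X)
print("NMSE:", np.linalg.norm(H_hat - H)**2 / np.linalg.norm(H)**2)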



Related research

Channel estimation and beamforming play critical roles in frequency-division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems. However, these two modules have been treated as two stand-alone components, which makes it difficult to achieve a global system optimality. In this paper, we propose a deep learning-based approach that directly optimizes the beamformers at the base station according to the received uplink pilots, thereby bypassing explicit channel estimation. Different from the existing fully data-driven approach where all the modules are replaced by deep neural networks (DNNs), a neural calibration method is proposed to improve the scalability of the end-to-end design. In particular, the backbone of conventional time-efficient algorithms, i.e., the least-squares (LS) channel estimator and the zero-forcing (ZF) beamformer, is preserved and DNNs are leveraged to calibrate their inputs for better performance. The permutation equivariance property of the formulated resource allocation problem is then identified to design a low-complexity neural network architecture. Simulation results show the superiority of the proposed neural calibration method over benchmark schemes in terms of both the spectral efficiency and scalability in large-scale wireless networks.
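The sketch below shows only the conventional backbone named in this abstract, a zero-forcing beamformer computed from a channel matrix; the calibration DNNs that would adjust its inputs are omitted, and the shapes and normalization are assumptions made for illustration.

import numpy as np

def zf_beamformer(H_in):
    """Zero-forcing beamforming backbone.

    H_in: (num_users, num_antennas) channel matrix; in the neural-calibration idea,
    a DNN would first adjust this input (or the received pilots feeding the LS
    estimator) before the fixed ZF computation below.
    """
    W = H_in.conj().T @ np.linalg.inv(H_in @ H_in.conj().T)   # pseudo-inverse of H_in
    return W / np.linalg.norm(W, axis=0, keepdims=True)       # unit-norm beam per user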
Jisheng Dai, An Liu, 2017
This paper addresses the problem of downlink channel estimation in frequency-division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems. The existing methods usually exploit hidden sparsity under a discrete Fourier transform (DFT) basis to estimate the downlink channel. However, there are at least two shortcomings of these DFT-based methods: 1) they are applicable to uniform linear arrays (ULAs) only, since the DFT basis requires a special structure of ULAs, and 2) they always suffer from a performance loss due to the leakage of energy over some DFT bins. To deal with the above shortcomings, we introduce an off-grid model for downlink channel sparse representation with arbitrary 2D-array antenna geometry, and propose an efficient sparse Bayesian learning (SBL) approach for the sparse channel recovery and off-grid refinement. The main idea of the proposed off-grid method is to consider the sampled grid points as adjustable parameters. Utilizing an inexact block majorization-minimization (MM) algorithm, the grid points are refined iteratively to minimize the off-grid gap. Finally, we further extend the solution to uplink-aided channel estimation by exploiting the angular reciprocity between downlink and uplink channels, which brings enhanced recovery performance.
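A minimal sketch of the off-grid idea, under assumed conventions: the dictionary is built from adjustable azimuth grid points and arbitrary 2D antenna coordinates, while the SBL recovery and the MM-based grid refinement themselves are not implemented here.

import numpy as np

def steering_dictionary(theta_grid, antenna_pos):
    """Array-response dictionary for an arbitrary 2D antenna geometry.

    theta_grid:  candidate azimuth angles in radians; in the off-grid model these
                 grid points are treated as adjustable parameters and refined
                 iteratively, rather than fixed as in a DFT basis.
    antenna_pos: (num_antennas, 2) antenna coordinates in wavelengths.
    """
    directions = np.stack([np.cos(theta_grid), np.sin(theta_grid)], axis=1)  # (G, 2) unit vectors
    return np.exp(1j * 2 * np.pi * antenna_pos @ directions.T)               # (num_antennas, G)

# The channel is modeled as steering_dictionary(theta) @ x with x sparse; SBL estimates x
# while the MM step nudges each active theta to close the off-grid gap (refinement omitted).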
Channel estimation and hybrid precoding are considered for a multi-user millimeter-wave massive multiple-input multiple-output (MIMO) system. A deep learning compressed sensing (DLCS) channel estimation scheme is proposed. The channel estimation neural network for the DLCS scheme is trained offline using simulated environments to predict the beamspace channel amplitude. Then the channel is reconstructed based on the obtained indices of dominant beamspace channel entries. A deep learning quantized phase (DLQP) hybrid precoder design method is developed after channel estimation. The training hybrid precoding neural network for the DLQP method is obtained offline considering the approximate phase quantization. Then the deployment hybrid precoding neural network (DHPNN) is obtained by replacing the approximate phase quantization with ideal phase quantization, and the output of the DHPNN is the analog precoding vector. Finally, the analog precoding matrix is obtained by stacking the analog precoding vectors, and the digital precoding matrix is calculated by zero-forcing. Simulation results demonstrate that the DLCS channel estimation scheme outperforms the existing schemes in terms of the normalized mean-squared error and the spectral efficiency, while the DLQP hybrid precoder design method has better spectral efficiency performance than other methods with low phase shifter resolution.
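The final step described above, computing the digital precoder by zero-forcing once the analog precoding matrix has been stacked, might look like the following sketch; the DLCS/DLQP networks are not reproduced, and the analog matrix is taken as given.

import numpy as np

def zf_digital_precoder(H, F_rf):
    """Digital zero-forcing stage of a hybrid precoder.

    H:    (num_users, num_antennas) estimated channel
    F_rf: (num_antennas, num_rf_chains) analog precoding matrix, formed by stacking
          the per-RF-chain phase-only vectors (assumed already designed).
    """
    H_eff = H @ F_rf                                               # effective baseband channel
    F_bb = H_eff.conj().T @ np.linalg.inv(H_eff @ H_eff.conj().T)  # ZF on the effective channel
    return F_bb / np.linalg.norm(F_rf @ F_bb, 'fro')               # normalize total transmit power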
Jiabao Gao, Mu Hu, Caijun Zhong, 2021
Channel estimation is one of the key issues in practical massive multiple-input multiple-output (MIMO) systems. Compared with conventional estimation algorithms, deep learning (DL) based ones have exhibited great potential in terms of performance and complexity. In this paper, an attention mechanism, exploiting the channel distribution characteristics, is proposed to improve the estimation accuracy of highly separable channels with narrow angular spread by realizing the divide-and-conquer policy. Specifically, we introduce a novel attention-aided DL channel estimation framework for conventional massive MIMO systems and devise an embedding method to effectively integrate the attention mechanism into the fully connected neural network for the hybrid analog-digital (HAD) architecture. Simulation results show that in both scenarios, the channel estimation performance is significantly improved with the aid of attention at the cost of a small complexity overhead. Furthermore, strong robustness under different system and channel parameters can be achieved by the proposed approach, which further strengthens its practical value. We also investigate the distributions of learned attention maps to reveal the role of attention, which endows the proposed approach with a certain degree of interpretability.
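As a loose illustration of attention re-weighting features before fully connected layers, the toy sketch below computes an attention map over feature groups; the projections, grouping, and embedding method are assumptions for illustration and do not follow this paper's specific architecture.

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_reweight(features, W_q, W_k):
    """Toy self-attention over feature groups (e.g., angular regions of the channel).

    features: (num_groups, dim) intermediate features of the estimation network
    W_q, W_k: (dim, dim) learned projections (placeholders here)
    Groups relevant to a narrow-angular-spread channel receive larger weights
    before the fully connected estimation layers.
    """
    Q, K = features @ W_q, features @ W_k
    attn = softmax(Q @ K.T / np.sqrt(features.shape[1]))   # (num_groups, num_groups) attention map
    return attn @ features                                  # re-weighted features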
Jisheng Dai, An Liu, 2018
This paper addresses the problem of joint downlink channel estimation and user grouping in massive multiple-input multiple-output (MIMO) systems, where the motivation comes from the fact that the channel estimation performance can be improved if we exploit additional common sparsity among nearby users. In the literature, a commonly used group sparsity model assumes that users in each group share a uniform sparsity pattern. In practice, however, this oversimplified assumption usually fails to hold, even for physically close users. Outliers deviating from the uniform sparsity pattern in each group may significantly degrade the effectiveness of common sparsity, and hence bring limited (or negative) gain for channel estimation. To better capture the group sparse structure in practice, we provide a general model having two sparsity components: commonly shared sparsity and individual sparsity, where the additional individual sparsity accounts for any outliers. Then, we propose a novel sparse Bayesian learning (SBL)-based framework to address the joint channel estimation and user grouping problem under the general sparsity model. The framework can fully exploit the common sparsity among nearby users and exclude the harmful effect from outliers simultaneously. Simulation results reveal substantial performance gains over the existing state-of-the-art baselines.
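The two-component sparsity model can be illustrated with a small synthetic example, assuming a generic dictionary: each user's coefficient vector combines a support common to the group with a few individual "outlier" entries. The SBL recovery itself is omitted, and all sizes are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
G, K = 128, 4            # dictionary size and number of users in one group
common_k, indiv_k = 6, 2  # sparsity levels of the common and individual components

# Support shared by the whole group, plus a few per-user outlier entries
common_support = rng.choice(G, size=common_k, replace=False)
coeffs = np.zeros((G, K), dtype=complex)
for k in range(K):
    support = np.union1d(common_support, rng.choice(G, size=indiv_k, replace=False))
    coeffs[support, k] = rng.standard_normal(support.size) + 1j * rng.standard_normal(support.size)

# Each user's channel would be dictionary @ coeffs[:, k]; the SBL framework in the
# paper recovers the common and individual components jointly (not implemented here).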
