
Neural Calibration for Scalable Beamforming in FDD Massive MIMO with Implicit Channel Estimation

Posted by Yifan Ma
Publication date: 2021
Language: English





Channel estimation and beamforming play critical roles in frequency-division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems. However, these two modules have been treated as stand-alone components, which makes it difficult to achieve global system optimality. In this paper, we propose a deep learning-based approach that directly optimizes the beamformers at the base station according to the received uplink pilots, thereby bypassing explicit channel estimation. Different from the existing fully data-driven approach, where all modules are replaced by deep neural networks (DNNs), a neural calibration method is proposed to improve the scalability of the end-to-end design. In particular, the backbone of conventional time-efficient algorithms, i.e., the least-squares (LS) channel estimator and the zero-forcing (ZF) beamformer, is preserved, and DNNs are leveraged to calibrate their inputs for better performance. The permutation equivariance property of the formulated resource allocation problem is then identified and exploited to design a low-complexity neural network architecture. Simulation results show the superiority of the proposed neural calibration method over benchmark schemes in terms of both spectral efficiency and scalability in large-scale wireless networks.
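To make the neural-calibration structure concrete, the following is a minimal NumPy sketch of the preserved LS-plus-ZF backbone with a calibration hook on the received pilots. The `calibrate` function is a hypothetical placeholder for the paper's learned (permutation-equivariant) network, and the dimensions and channel model are illustrative assumptions rather than the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, L = 64, 8, 8          # BS antennas, users, pilot length (L >= K)

# Ground-truth channel (i.i.d. Rayleigh, for illustration only)
H = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)

# Orthogonal uplink pilots (DFT columns) and noisy received pilot matrix
X = np.fft.fft(np.eye(L))[:, :K] / np.sqrt(L)        # L x K pilot matrix
noise = 0.05 * (rng.standard_normal((N, L)) + 1j * rng.standard_normal((N, L)))
Y = H @ X.conj().T + noise                           # N x L received pilots

def calibrate(Y):
    """Hypothetical stand-in for the paper's DNN calibration of the LS input.

    Here an identity map; in the neural-calibration design this would be a
    learned, permutation-equivariant network acting on the received pilots
    before the conventional LS backbone.
    """
    return Y

# LS channel estimate, then ZF beamforming on the estimate
H_hat = calibrate(Y) @ np.linalg.pinv(X.conj().T)          # N x K LS estimate
W_zf = H_hat @ np.linalg.inv(H_hat.conj().T @ H_hat)       # N x K ZF beamformer
W_zf /= np.linalg.norm(W_zf)                               # total power normalization

G = H.conj().T @ W_zf                                      # K x K effective gains
print("max inter-user leakage:", np.abs(G - np.diag(np.diag(G))).max().round(3))
```

In the actual design, `calibrate` would be trained end-to-end so that the downstream ZF beamformers directly maximize the system objective, rather than minimizing channel estimation error.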




Read also

Jisheng Dai, An Liu, 2017
This paper addresses the problem of downlink channel estimation in frequency-division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems. Existing methods usually exploit hidden sparsity under a discrete Fourier transform (DFT) basis to estimate the downlink channel. However, these DFT-based methods have at least two shortcomings: 1) they are applicable to uniform linear arrays (ULAs) only, since the DFT basis requires the special structure of a ULA, and 2) they always suffer from a performance loss due to the leakage of energy across DFT bins. To deal with these shortcomings, we introduce an off-grid model for downlink channel sparse representation with arbitrary 2D-array antenna geometry, and propose an efficient sparse Bayesian learning (SBL) approach for sparse channel recovery and off-grid refinement. The main idea of the proposed off-grid method is to treat the sampled grid points as adjustable parameters. Using an inexact block majorization-minimization (MM) algorithm, the grid points are refined iteratively to minimize the off-grid gap. Finally, we further extend the solution to uplink-aided channel estimation by exploiting the angular reciprocity between downlink and uplink channels, which brings enhanced recovery performance.
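As a rough illustration of treating grid points as adjustable parameters, the sketch below builds a first-order off-grid steering dictionary for a ULA. The ULA geometry, the linearized off-grid model, and the dimensions are simplifying assumptions (the paper handles arbitrary 2D geometries); the SBL recovery and MM-based refinement of the offsets are omitted.

```python
import numpy as np

N, G = 32, 64                                    # antennas, coarse grid size
grid = np.linspace(-1, 1, G, endpoint=False)     # grid over sin(angle)

def steer(s):
    """ULA steering vector for spatial frequency s = sin(theta), half-wavelength spacing."""
    return np.exp(1j * np.pi * np.arange(N) * s) / np.sqrt(N)

def dsteer(s):
    """Derivative of the steering vector w.r.t. s, for the first-order off-grid model."""
    return (1j * np.pi * np.arange(N)) * steer(s)

def offgrid_dict(beta):
    """First-order off-grid dictionary: a(g + beta_g) ~= a(g) + beta_g * a'(g)."""
    return np.stack([steer(g) + b * dsteer(g) for g, b in zip(grid, beta)], axis=1)

# With beta = 0 this reduces to the ordinary on-grid dictionary; an SBL loop
# would alternate sparse recovery with updates of beta to close the off-grid gap.
A = offgrid_dict(np.zeros(G))
print(A.shape)   # (32, 64)
```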
We propose a novel randomized channel sparsifying hybrid precoding (RCSHP) design to reduce the signaling overhead of channel estimation and the hardware cost and power consumption at the base station (BS), in order to fully harvest the benefits of frequency-division duplex (FDD) massive multiple-input multiple-output (MIMO) systems. RCSHP allows time-sharing among multiple analog precoders, each serving a compatible user group. The analog precoder is adapted to the channel statistics to properly sparsify the channel for the associated user group, such that the resulting effective channel (the product of the channel and the analog precoder) not only has enough spatial degrees of freedom (DoF) to serve this group of users, but also can be accurately estimated under a limited pilot budget. The digital precoder is adapted to the effective channel based on duality theory to facilitate power allocation and exploit the spatial multiplexing gain. We formulate the joint optimization of the time-sharing factors and the associated sets of analog precoders and power allocations as a general utility optimization problem, which accounts for the impact of effective channel estimation error on system performance. We then propose an efficient stochastic successive convex approximation algorithm that provably obtains Karush-Kuhn-Tucker (KKT) points of this problem.
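The effective-channel construction at the heart of this design can be sketched in a few lines. Below, a random constant-modulus analog precoder stands in for the statistics-adapted one, and a plain ZF digital precoder stands in for the duality-based power-allocating one; both substitutions, and the dimensions, are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 64, 8, 4   # BS antennas, RF chains, users in the scheduled group

H = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)

# Analog precoder with constant-modulus entries. In RCSHP this would be
# adapted to channel statistics to sparsify the group's effective channel.
F_rf = np.exp(1j * 2 * np.pi * rng.random((N, M))) / np.sqrt(N)

# Effective channel: product of channel and analog precoder. Only this
# K x M matrix must be estimated from pilots, not the full K x N channel.
H_eff = H @ F_rf

# Digital precoder computed on the effective channel (ZF here for brevity).
F_bb = H_eff.conj().T @ np.linalg.inv(H_eff @ H_eff.conj().T)   # M x K
F = F_rf @ F_bb
F /= np.linalg.norm(F)

print("effective-channel spatial DoF (rank):", np.linalg.matrix_rank(H_eff))
```

The dimensionality reduction from N to M is what shrinks the pilot budget: estimating H_eff needs on the order of M pilot symbols rather than N.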
This paper proposes a deep learning-based channel estimation method for multi-cell interference-limited massive MIMO systems, in which base stations equipped with a large number of antennas serve multiple single-antenna users. The proposed estimator employs a specially designed deep neural network (DNN) to first denoise the received signal, followed by conventional least-squares (LS) estimation. We analytically prove that our LS-type deep channel estimator can approach minimum mean square error (MMSE) estimator performance for high-dimensional signals, while avoiding the MMSE estimator's requirement for knowledge of complex channel statistics.
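The two-stage denoise-then-LS structure is straightforward to express. In the sketch below, a complex soft-thresholding function is a hypothetical stand-in for the paper's trained DNN denoiser; the pilot design, dimensions, and noise level are likewise illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, L = 64, 8, 8   # BS antennas, users, pilot length

H = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)
X = np.fft.fft(np.eye(L))[:, :K] / np.sqrt(L)        # orthogonal pilots
Y = H @ X.conj().T + 0.3 * (rng.standard_normal((N, L))
                            + 1j * rng.standard_normal((N, L)))

def denoise(Y, tau=0.2):
    """Placeholder for the paper's DNN denoiser: complex soft-thresholding.

    Purely illustrative; the actual estimator uses a trained deep network
    whose denoised output feeds the same LS step below.
    """
    mag = np.abs(Y)
    return np.where(mag > tau, (1 - tau / np.maximum(mag, 1e-12)) * Y, 0)

H_ls  = Y @ np.linalg.pinv(X.conj().T)               # plain LS
H_dnn = denoise(Y) @ np.linalg.pinv(X.conj().T)      # denoise, then LS

for name, Hh in [("LS", H_ls), ("denoise+LS", H_dnn)]:
    nmse = np.linalg.norm(Hh - H) ** 2 / np.linalg.norm(H) ** 2
    print(name, "NMSE:", round(float(nmse), 3))
```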
Jisheng Dai, An Liu, 2018
This paper addresses the problem of joint downlink channel estimation and user grouping in massive multiple-input multiple-output (MIMO) systems, where the motivation comes from the fact that channel estimation performance can be improved by exploiting additional common sparsity among nearby users. In the literature, a commonly used group sparsity model assumes that users in each group share a uniform sparsity pattern. In practice, however, this oversimplified assumption usually fails to hold, even for physically close users. Outliers that deviate from the uniform sparsity pattern in each group may significantly degrade the effectiveness of common sparsity, and hence bring limited (or negative) gain for channel estimation. To better capture the group sparse structure in practice, we provide a general model having two sparsity components: commonly shared sparsity and individual sparsity, where the additional individual sparsity accounts for any outliers. We then propose a novel sparse Bayesian learning (SBL)-based framework to address the joint channel estimation and user grouping problem under the general sparsity model. The framework can fully exploit the common sparsity among nearby users while simultaneously excluding the harmful effect of outliers. Simulation results reveal substantial performance gains over the existing state-of-the-art baselines.
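A small sketch of the two-component sparsity model may help: each user's channel combines a support shared across the group with a few individual outlier entries. The angular-grid representation, dimensions, and sparsity levels below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
G, users = 64, 4            # angular grid size, users in one group
s_common, s_indiv = 6, 2    # shared and individual sparsity levels

common_supp = rng.choice(G, s_common, replace=False)   # shared by the whole group

channels = []
for _ in range(users):
    h = np.zeros(G, dtype=complex)
    # commonly shared component: same support, user-specific gains
    h[common_supp] = rng.standard_normal(s_common) + 1j * rng.standard_normal(s_common)
    # individual component: per-user outliers off the shared support
    free = np.setdiff1d(np.arange(G), common_supp)
    indiv_supp = rng.choice(free, s_indiv, replace=False)
    h[indiv_supp] = rng.standard_normal(s_indiv) + 1j * rng.standard_normal(s_indiv)
    channels.append(h)

supports = [set(np.flatnonzero(h)) for h in channels]
print("support shared by all users:", sorted(set.intersection(*supports)))
```

A uniform-sparsity model would force the individual entries onto the shared support; the two-component model lets the SBL framework absorb them instead of letting them corrupt the group estimate.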
In frequency-division duplexing (FDD) massive multiple-input multiple-output (MIMO), deep learning (DL)-based superimposed channel state information (CSI) feedback has shown promising performance. However, it still faces many challenges, such as the high complexity of parameter tuning, the large number of training parameters, and the long training time. To overcome these challenges, an extreme learning machine (ELM)-based superimposed CSI feedback scheme is proposed in this paper, in which the downlink CSI is spread, superimposed on the uplink user data sequence (UL-US), and fed back to the base station (BS). At the BS, an ELM-based network is constructed to recover both the downlink CSI and the UL-US.
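The ELM recovery step itself is lightweight, which is what cuts training time and parameter tuning: a fixed random hidden layer followed by a closed-form ridge least-squares readout. A generic real-valued ELM fit is sketched below; the CSI spreading/superposition details are omitted, and the toy recovery task is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def elm_fit(X, T, hidden=128, reg=1e-3):
    """Train an extreme learning machine: random hidden weights, LS output weights."""
    W = rng.standard_normal((X.shape[1], hidden))   # fixed random input weights
    b = rng.standard_normal(hidden)
    Phi = np.tanh(X @ W + b)                        # hidden-layer activations
    # ridge-regularized least-squares readout: the only trained parameters
    beta = np.linalg.solve(Phi.T @ Phi + reg * np.eye(hidden), Phi.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: learn to invert a random linear mixing, standing in for
# recovering superimposed symbols from a received sequence (dimensions arbitrary).
A = rng.standard_normal((32, 16))
T = rng.standard_normal((2000, 16))                 # targets (e.g., CSI symbols)
X = T @ A.T + 0.05 * rng.standard_normal((2000, 32))

W, b, beta = elm_fit(X, T)
err = np.linalg.norm(elm_predict(X, W, b, beta) - T) / np.linalg.norm(T)
print("relative fit error:", round(float(err), 3))
```

Because only the readout weights are solved for, there is no backpropagation and essentially one hyperparameter (the ridge factor), in contrast to the deep feedback networks the paper compares against.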
