Sparse channel estimation for massive multiple-input multiple-output systems has drawn much attention in recent years. The required pilot overhead is substantially reduced when the sparse channel state vectors can be reconstructed from a small number of measurements. A popular approach for sparse reconstruction is to solve the least-squares problem with a convex regularizer. However, the convex regularizer is either too loose to enforce sparsity or leads to biased estimation. In this paper, the sparse channel reconstruction is solved by minimizing the least-squares objective with a nonconvex regularizer, which can exactly express the sparsity constraint without introducing serious bias into the solution. A novel algorithm is proposed to solve the resulting nonconvex optimization via difference of convex functions (DC) programming and gradient projection descent. Simulation results show that the proposed algorithm is fast and accurate, and that it outperforms existing sparse recovery algorithms in terms of reconstruction error.
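To make the approach concrete, below is a minimal real-valued NumPy sketch under assumed choices: the nonconvex regularizer is taken as the l1-norm minus the l1-norm of the K largest-magnitude entries (a standard DC-amenable surrogate for a sparsity constraint), the subtracted term is linearized at each outer iteration, and each convex subproblem is solved by proximal gradient steps. The function name and the parameters lam, K, outer, and inner are illustrative, not the paper's exact formulation.

```python
import numpy as np

def dc_sparse_recovery(A, y, lam, K, outer=30, inner=100):
    # Sketch: min_x 0.5*||y - A x||^2 + lam*(||x||_1 - ||x||_{1,top-K}),
    # written as (convex) - (convex); the subtracted top-K term is
    # linearized at each outer iteration (DC programming).
    n = A.shape[1]
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    for _ in range(outer):
        # Subgradient of the subtracted term: signs of the K largest entries
        w = np.zeros(n)
        top = np.argsort(-np.abs(x))[:K]
        w[top] = np.sign(x[top])
        for _ in range(inner):             # inner loop: proximal gradient (ISTA)
            grad = A.T @ (A @ x - y) - lam * w
            z = x - grad / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```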
Novel sparse reconstruction algorithms are proposed for beamspace channel estimation in massive multiple-input multiple-output systems. The proposed algorithms minimize a least-squares objective with a nonconvex regularizer. This regularizer removes the penalties on a few large-magnitude elements from the conventional l1-norm regularizer, so that it penalizes only the remaining elements that are expected to be zero. Accurate and fast reconstruction is achieved by performing gradient projection updates within the framework of difference of convex functions (DC) programming. A double-loop algorithm and a single-loop algorithm are proposed via different DC decompositions, and the two algorithms have distinct computational complexities and convergence rates. An extension algorithm is then proposed by designing the step sizes of the single-loop algorithm; it converges faster and achieves approximately the same accuracy as the proposed double-loop algorithm. Numerical results show significant advantages of the proposed algorithms over existing reconstruction algorithms in terms of reconstruction accuracy and runtime. Compared to benchmark channel estimation techniques, the proposed algorithms also achieve smaller mean squared error and higher achievable spectral efficiency.
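A hedged sketch of the single-loop idea described above: the subgradient of the subtracted term is refreshed at every proximal-gradient step instead of solving each convex subproblem to convergence, so only one loop remains. The constant step size 1/L is an assumption for simplicity; the extension algorithm's designed step sizes are not reproduced here.

```python
import numpy as np

def dc_sparse_single_loop(A, y, lam, K, iters=300):
    # Single-loop sketch: one proximal-gradient step per DC linearization.
    n = A.shape[1]
    x = np.zeros(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative constant step size
    for _ in range(iters):
        w = np.zeros(n)                      # refresh the subgradient every iteration
        top = np.argsort(-np.abs(x))[:K]
        w[top] = np.sign(x[top])
        z = x - step * (A.T @ (A @ x - y) - lam * w)
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x
```

Each iteration here costs two matrix-vector products and a sort, which is why the per-iteration cost is lower than one full inner solve of the double-loop sketch, at the price of a different convergence behavior.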
The problem of wideband massive MIMO channel estimation is considered. Targeting low-complexity algorithms as well as small training overhead, a compressive sensing (CS) approach is pursued. Unfortunately, due to the Kronecker-type sensing (measurement) matrix corresponding to this setup, standard CS algorithms and analysis methodology do not directly apply. By recognizing that the channel possesses a special structure, termed hierarchical sparsity, we propose an efficient algorithm that explicitly takes this property into account. In addition, by extending the standard CS analysis methodology to hierarchically sparse vectors, we provide a rigorous analysis of the algorithm's performance in terms of estimation error as well as the number of pilot subcarriers required to achieve it. Small training overhead, in turn, means a larger number of supported users in a cell and potentially improved pilot decontamination. We believe that this is the first paper to draw a rigorous connection between the hierarchical framework and Kronecker measurements. Numerical results verify the advantage of the proposed approach in this setting over standard CS algorithms.
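As a rough, real-valued illustration of the hierarchical-sparsity idea (not the paper's exact algorithm or guarantees), the sketch below keeps the sigma largest-magnitude entries inside each block and then the s most energetic blocks, and plugs this operator into an IHT-style iteration; s, sigma, and block are illustrative parameters.

```python
import numpy as np

def hi_threshold(x, s, sigma, block):
    # Hierarchical (s, sigma)-thresholding: sigma-sparsify each length-`block`
    # sub-vector, then keep the s blocks with the largest remaining energy.
    X = x.reshape(-1, block).copy()
    for b in X:                                   # rows are views: in-place edit
        b[np.argsort(-np.abs(b))[sigma:]] = 0.0
    keep = np.argsort(-np.linalg.norm(X, axis=1))[:s]
    out = np.zeros_like(X)
    out[keep] = X[keep]
    return out.reshape(-1)

def hi_iht(A, y, s, sigma, block, iters=50):
    # Iterative hard thresholding with the hierarchical operator plugged in.
    x = np.zeros(A.shape[1])
    mu = 1.0 / np.linalg.norm(A, 2) ** 2          # step size
    for _ in range(iters):
        x = hi_threshold(x + mu * A.T @ (y - A @ x), s, sigma, block)
    return x
```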
We consider the problem of channel estimation for uplink multiuser massive MIMO systems, where, in order to significantly reduce the hardware cost and power consumption, one-bit analog-to-digital converters (ADCs) are used at the base station (BS) to quantize the received signal. Channel estimation for one-bit massive MIMO systems is challenging due to the severe distortion caused by the coarse quantization. It was shown in previous studies that an extremely long training sequence is required to attain acceptable performance. In this paper, we study the problem of optimal one-bit quantization design for channel estimation in one-bit massive MIMO systems. Our analysis reveals that, if the quantization thresholds are optimally devised, using one-bit ADCs can achieve an estimation error close to (larger by only a factor of $\pi/2$) that of an ideal estimator with access to the unquantized data. The optimal quantization thresholds, however, depend on the unknown channel parameters. To cope with this difficulty, we propose an adaptive quantization (AQ) approach, in which the thresholds are adaptively adjusted so that they converge to the optimal thresholds, and a random quantization (RQ) scheme, which randomly generates a set of nonidentical thresholds based on statistical prior knowledge of the channel. Simulation results show that our proposed AQ and RQ schemes, owing to their judiciously devised thresholds, provide a significant performance improvement over the conventional fixed quantization scheme that uses a fixed (typically zero) threshold, while substantially reducing the training overhead for channel estimation. In particular, even with a moderate number of pilot symbols (about five times the number of users), the AQ scheme can provide an achievable rate close to that of the perfect channel state information (CSI) case.
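The one-bit measurement model and the RQ idea can be sketched as follows; the threshold distribution shown (scaled to match a unit-variance channel prior) is an illustrative stand-in for the paper's threshold designs, and all names and dimensions are hypothetical.

```python
import numpy as np

def one_bit_pilots(A, h, thresholds, noise_std=0.1, rng=None):
    # One-bit ADC model: only the sign of the noisy output relative to a
    # per-measurement threshold is retained.
    rng = np.random.default_rng() if rng is None else rng
    r = A @ h + noise_std * rng.standard_normal(A.shape[0])
    return np.sign(r - thresholds)

# Random quantization (RQ) sketch: draw nonidentical thresholds whose scale
# matches the prior statistics of the unquantized output. For h ~ N(0, I),
# measurement i has standard deviation about ||a_i||, so:
rng = np.random.default_rng(0)
A = rng.standard_normal((256, 32)) / 16.0
tau = np.linalg.norm(A, axis=1) * rng.standard_normal(256)  # RQ thresholds
bits = one_bit_pilots(A, rng.standard_normal(32), tau, rng=rng)
```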
Channel estimation is very challenging when the receiver is equipped with a limited number of radio-frequency (RF) chains in beamspace millimeter-wave (mmWave) massive multiple-input multiple-output systems. To solve this problem, we exploit a learned denoising-based approximate message passing (LDAMP) network. This neural network can learn the channel structure and estimate the channel from a large amount of training data. Furthermore, we provide an analytical framework for the asymptotic performance of the channel estimator. Based on our analysis and simulation results, the LDAMP neural network significantly outperforms state-of-the-art compressed sensing-based algorithms even when the receiver is equipped with a small number of RF chains. Therefore, deep learning is a powerful tool for channel estimation in mmWave communications.
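The denoising-based AMP iteration that such a network unrolls can be sketched as below; LDAMP would replace denoiser with a trained CNN denoiser per layer (not shown), and the Monte Carlo divergence estimate is one standard way to form the Onsager correction term. This is a generic D-AMP sketch, not the paper's trained network.

```python
import numpy as np

def damp(A, y, denoiser, iters=10, rng=None):
    # Denoising-based AMP: gradient step, plug-in denoiser, Onsager correction.
    rng = np.random.default_rng() if rng is None else rng
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(iters):
        sigma = np.linalg.norm(z) / np.sqrt(m)   # effective noise level estimate
        r = x + A.T @ z                          # pseudo-data seen by the denoiser
        x_new = denoiser(r, sigma)
        # Monte Carlo estimate of the denoiser divergence (Onsager term)
        eps = 1e-3 * sigma + 1e-12
        probe = rng.standard_normal(n)
        div = probe @ (denoiser(r + eps * probe, sigma) - x_new) / eps
        z = y - A @ x_new + (div / m) * z
        x = x_new
    return x

# With a soft-thresholding denoiser this reduces to l1-style AMP:
soft = lambda r, s: np.sign(r) * np.maximum(np.abs(r) - s, 0.0)
# x_hat = damp(A, y, soft)
```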
Millimeter-wave massive MIMO with a lens antenna array can considerably reduce the number of required radio-frequency (RF) chains by beam selection. However, beam selection requires the base station to acquire accurate beamspace channel information. This is a challenging task, as the dimension of the beamspace channel is large while the number of RF chains is limited. In this paper, we investigate the beamspace channel estimation problem in mmWave massive MIMO systems with a lens antenna array. Specifically, we first design an adaptive selecting network for mmWave massive MIMO systems with a lens antenna array, and based on this network, we formulate beamspace channel estimation as a sparse signal recovery problem. Then, by fully utilizing the structural characteristics of the mmWave beamspace channel, we propose a support detection (SD)-based channel estimation scheme with reliable performance and low pilot overhead. Finally, performance and complexity analyses are provided to show that the proposed SD-based scheme can estimate the support of the sparse beamspace channel with accuracy comparable to or higher than that of conventional schemes. Simulation results verify that the proposed SD-based channel estimation scheme outperforms conventional schemes and achieves satisfactory accuracy even in the low-SNR region, since the structural characteristics of the beamspace channel can be exploited.
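A support-detection-style loop in the spirit described above might look like the sketch below: locate the strongest beam of each path, take a window of V adjacent beams as its support, solve least squares on that support, and peel the path's contribution off the residual. L_paths, V, and the wrap-around indexing are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def sd_channel_estimate(Phi, y, L_paths=3, V=8):
    # Phi: (measurements x beams) sensing matrix; y: received pilot vector.
    n = Phi.shape[1]
    h_hat = np.zeros(n, dtype=complex)
    res = y.astype(complex).copy()
    for _ in range(L_paths):
        c = np.argmax(np.abs(Phi.conj().T @ res))         # strongest beam index
        supp = np.arange(c - V // 2, c - V // 2 + V) % n  # V adjacent beams
        coef, *_ = np.linalg.lstsq(Phi[:, supp], res, rcond=None)
        h_hat[supp] += coef                               # accumulate path estimate
        res = y - Phi @ h_hat                             # update the residual
    return h_hat
```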