Millimeter-wave/terahertz (mmW/THz) communications have shown great potential for wideband massive access in next-generation cellular Internet-of-Things (IoT) networks. To shorten the pilot sequences and reduce the computational complexity of wideband massive access, this paper proposes a novel joint activity detection and channel estimation (JADCE) algorithm. Specifically, after formulating JADCE as the problem of recovering a simultaneously sparse-group and low-rank matrix according to the characteristics of the mmW/THz channel, we prove that jointly imposing an $l_1$-norm and a low-rank constraint on such a matrix achieves robust recovery under sufficient conditions, and verify that the number of measurements required for the mmW/THz wideband massive access system is significantly smaller than the currently known measurement bounds for conventional simultaneously sparse and low-rank recovery. Furthermore, we propose a multi-rank-aware method by exploiting the quotient geometry of the product of complex rank-$L$ matrices, where $L$ is the number of scattering clusters. Theoretical analysis and simulation results confirm the superiority of the proposed algorithm in terms of computational complexity, detection error rate, and channel estimation accuracy.
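The abstract above does not spell out its recovery algorithm, but the generic idea it builds on can be sketched with alternating proximal steps: a soft-threshold for sparsity followed by singular-value thresholding for low rank. The following minimal numpy sketch is illustrative only (function names, penalties, and step sizes are assumptions, not the paper's method):

```python
import numpy as np

def soft(Z, t):
    """Complex soft-thresholding: shrink entry magnitudes by t (prox of the l1 norm)."""
    mag = np.maximum(np.abs(Z), 1e-12)
    return Z * np.maximum(1.0 - t / mag, 0.0)

def svt(Z, t):
    """Singular-value thresholding (prox of the nuclear norm)."""
    U, s, Vh = np.linalg.svd(Z, full_matrices=False)
    return (U * np.maximum(s - t, 0.0)) @ Vh

def sparse_lowrank_recover(Y, A, lam_s=0.05, lam_r=0.05, step=0.5, iters=300):
    """Estimate X from Y = A @ X by alternating proximal gradient steps that
    promote entrywise sparsity and low rank simultaneously (a sketch, not the
    paper's algorithm).  'step' should stay below 1 / sigma_max(A)^2."""
    X = np.zeros((A.shape[1], Y.shape[1]), dtype=complex)
    for _ in range(iters):
        G = X - step * A.conj().T @ (A @ X - Y)   # gradient step on 0.5*||Y - A X||_F^2
        X = svt(soft(G, step * lam_s), step * lam_r)
    return X
```

Composing the two proximal operators in sequence is a heuristic for the combined penalty; the paper's guarantees concern the joint $l_1$ plus low-rank formulation, which this sketch only approximates.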
Terahertz (THz) communication is considered a promising technology for future 6G networks. To overcome the severe attenuation and relieve the high power consumption, massive MIMO with hybrid precoding has been widely considered for THz communication. However, accurate wideband channel estimation is challenging in THz massive MIMO systems. Existing wideband channel estimation schemes that rely on the ideal assumption of a common sparse channel support across subcarriers suffer severe performance loss due to the beam split effect. In this paper, we propose a beam split pattern detection based channel estimation scheme to realize reliable wideband channel estimation. Specifically, a comprehensive analysis of the angle-domain sparse structure of the wideband channel is provided by considering the beam split effect. Based on this analysis, we define a series of index sets called beam split patterns, which are proved to be in one-to-one correspondence with physical channel directions. Inspired by this one-to-one match, we first estimate the physical channel direction by exploiting beam split patterns. Then, the sparse channel supports at different subcarriers are obtained with a support detection window, generated by expanding the beam split pattern determined by the estimated physical channel direction. This estimation procedure is repeated path by path until all path components are estimated. The proposed scheme exploits the wideband channel property implied by the beam split effect, which significantly improves the channel estimation accuracy. Simulation results show that the proposed scheme achieves higher accuracy than existing schemes.
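The beam split effect underlying this scheme is easy to visualize: for a half-wavelength ULA calibrated at carrier $f_c$, a path with physical direction $\psi = \sin\theta$ appears at spatial frequency $(f_m/f_c)\,\psi$ on subcarrier $f_m$, so its angle-domain support index drifts with frequency. The sketch below computes that per-subcarrier index sequence, i.e., one plausible rendering of a "beam split pattern" (the grid convention and function signature are assumptions, not the paper's definitions):

```python
import numpy as np

def beam_split_pattern(psi, N, fc, subcarriers):
    """Angle-domain DFT-grid index hit by a path with direction psi = sin(theta)
    at each subcarrier f_m.  The observed spatial frequency scales as
    (f_m / fc) * psi, so the support index drifts with frequency."""
    grid = np.arange(-(N // 2), N - N // 2) * (2.0 / N)   # DFT grid over [-1, 1)
    return np.array([int(np.argmin(np.abs(grid - (fm / fc) * psi)))
                     for fm in subcarriers])

# Example: a 256-antenna array at a 100 GHz carrier with 10 GHz of bandwidth.
fc = 100e9
fm = fc + np.linspace(-5e9, 5e9, 8)
print(beam_split_pattern(0.6, 256, fc, fm))   # support index shifts across subcarriers
```

Because the drift is a deterministic function of the physical direction, matching an observed index pattern against candidate directions recovers the direction, which is the intuition behind the one-to-one correspondence the abstract invokes.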
Grant-free random access is a promising protocol to support massive access in beyond-fifth-generation (B5G) cellular Internet-of-Things (IoT) with sporadic traffic. Specifically, in each coherence interval, the base station (BS) performs joint activity detection and channel estimation (JADCE) before data transmission. Due to the deployment of a large-scale antenna array and the existence of a huge number of IoT devices, JADCE usually has high computational complexity and needs long pilot sequences. To address these challenges, this paper proposes a dimension reduction method that projects the original device state matrix onto a low-dimensional space by exploiting its sparse and low-rank structure. We then develop an optimized design framework with a coupled full-column-rank constraint for JADCE to reduce the size of the search space. However, the resulting problem is non-convex and highly intractable, and conventional convex relaxation approaches are inapplicable. To this end, we propose a logarithmic smoothing method for the non-smooth objective function, transform the matrix of interest into a positive semidefinite matrix, and then give a Riemannian trust-region algorithm to solve the problem in the complex field. Simulation results show that the proposed algorithm is efficient for large-scale JADCE problems and requires shorter pilot sequences than state-of-the-art algorithms that exploit only the sparsity of the device state matrix.
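The abstract does not specify the exact smoothing, but the role of logarithmic smoothing can be illustrated on a row-sparsity penalty: replacing the non-smooth $l_{2,1}$ norm with $\sum_i \log(1 + \|x_i\|_2/\epsilon)$ yields a function that is differentiable everywhere and can therefore feed smooth solvers such as a Riemannian trust-region method. A minimal sketch under that assumption (the surrogate and its gradient are illustrative, not the paper's formulation):

```python
import numpy as np

def log_smoothed_rowsparsity(X, eps=1e-3):
    """Smooth surrogate for the row-sparsity-promoting l_{2,1} penalty:
    f(X) = sum_i log(1 + ||x_i||_2 / eps).  Returns the value and its
    Euclidean gradient, usable by gradient-based (e.g., trust-region) solvers."""
    r = np.linalg.norm(X, axis=1)                     # row norms
    f = np.sum(np.log1p(r / eps))
    scale = 1.0 / ((eps + r) * np.maximum(r, 1e-12))  # d/dr log(1 + r/eps) = 1/(eps + r)
    return f, X * scale[:, None]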
Terahertz (THz) communication is widely considered a key enabler for future 6G wireless systems. However, THz links are subject to high propagation losses and inter-symbol interference due to the frequency selectivity of the channel. Massive multiple-input multiple-output (MIMO) along with orthogonal frequency-division multiplexing (OFDM) can be used to deal with these problems. Nevertheless, when the propagation delay across the base station (BS) antenna array exceeds the symbol period, the spatial response of the BS array varies across the OFDM subcarriers. This phenomenon, known as beam squint, renders narrowband combining approaches ineffective. Additionally, channel estimation becomes challenging in the absence of combining gain during the training stage. In this work, we address the channel estimation and hybrid combining problems in wideband THz massive MIMO with uniform planar arrays. Specifically, we first introduce a low-complexity beam squint mitigation scheme based on true-time delay. Next, we propose a novel variant of the popular orthogonal matching pursuit (OMP) algorithm to accurately estimate the channel with low training overhead. Our channel estimation and hybrid combining schemes are analyzed both theoretically and numerically. Moreover, the proposed schemes are extended to the multi-antenna user case. Simulation results showcase the performance gains offered by our design compared to standard narrowband combining and OMP-based channel estimation.
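For reference, the standard OMP baseline that this work builds its variant on (and compares against) is the greedy pursuit below; the paper's variant additionally accounts for beam squint, which this sketch does not:

```python
import numpy as np

def omp(y, Phi, k):
    """Standard orthogonal matching pursuit: greedily add the dictionary
    column most correlated with the residual, then re-fit all selected
    coefficients by least squares.  Phi: (M, N) dictionary, k: sparsity."""
    support = []
    residual = y.astype(complex)
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.conj().T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1], dtype=complex)
    x[support] = coef
    return x
```

Under beam squint, using a single carrier-frequency dictionary $\Phi$ for all subcarriers misplaces the support, which is precisely the failure mode the proposed variant is designed to avoid.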
In the massive machine-type communication (mMTC) scenario, a large number of devices with sporadic traffic need to access the network on limited radio resources. While grant-free random access has emerged as a promising mechanism for massive access, its potential has not been fully unleashed. In particular, the common sparsity pattern in the received pilot and data signals has been ignored in most existing studies, and auxiliary information from channel decoding has not been utilized for user activity detection. This paper develops advanced receivers in a holistic manner for joint activity detection, channel estimation, and data decoding. In particular, a turbo receiver based on the bilinear generalized approximate message passing (BiG-AMP) algorithm is developed. In this receiver, all received symbols are utilized to jointly estimate the channel state, user activity, and soft data symbols, which effectively exploits the common sparsity pattern. Meanwhile, the extrinsic information from the channel decoder assists the joint channel estimation and data detection. To reduce complexity, a low-cost side-information-aided receiver is also proposed, in which the channel decoder provides side information to update the estimates of whether a user is active. Simulation results show that the turbo receiver effectively reduces the activity detection, channel estimation, and data decoding errors, while the side-information-aided receiver notably outperforms the conventional method with relatively low complexity.
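The BiG-AMP turbo receiver above is far richer than what fits in a snippet, but the AMP machinery it generalizes can be shown in its simplest linear form. The sketch below is a minimal real-valued AMP iteration for $y = Ax$ with a soft-thresholding denoiser, assuming roughly unit-norm columns of $A$; the Onsager correction added to the residual is what distinguishes AMP from plain iterative thresholding (parameter choices here are illustrative):

```python
import numpy as np

def amp(y, A, lam=2.0, iters=30):
    """Minimal real-valued AMP for y = A x with a soft-thresholding denoiser.
    The Onsager term keeps the effective noise approximately Gaussian
    across iterations, which is what makes the simple denoiser effective."""
    M, N = A.shape
    x, z = np.zeros(N), y.copy()
    for _ in range(iters):
        tau = lam * np.linalg.norm(z) / np.sqrt(M)          # threshold from residual energy
        r = x + A.T @ z                                      # pseudo-data for the denoiser
        x = np.sign(r) * np.maximum(np.abs(r) - tau, 0.0)    # soft-thresholding denoiser
        z = y - A @ x + z * (np.count_nonzero(x) / M)        # residual + Onsager correction
    return x
```

BiG-AMP extends this to the bilinear setting where both the channel matrix and the data symbols are unknown, and the turbo structure feeds the decoder's extrinsic information back into the priors of this loop.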
Faced with massive connectivity, sporadic transmissions, and small data packets in future cellular communication, this paper considers a grant-free non-orthogonal random access (NORA) system, which can reduce the access delay and support more devices. To address the joint user activity detection (UAD) and channel estimation (CE) problem in the grant-free NORA system, we propose a deep neural network-aided message passing-based block sparse Bayesian learning (DNN-MP-BSBL) algorithm. In this algorithm, the message passing process is transferred from a factor graph to a deep neural network (DNN). Weights are imposed on the messages in the DNN and trained to minimize the estimation error. It is shown that these weights can alleviate the convergence problem of the MP-BSBL algorithm. Simulation results show that the proposed DNN-MP-BSBL algorithm improves the UAD and CE accuracy with a smaller number of iterations.
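The "transfer from factor graph to DNN" in this abstract is an instance of algorithm unrolling: each message-passing iteration becomes a network layer with its own trainable weights. As an analogy only (DNN-MP-BSBL unrolls the MP-BSBL messages, not the shrinkage update used here), a forward pass of an unrolled iterative estimator looks like the following, where `thetas` and `gammas` are the per-layer weights one would train with backpropagation:

```python
import numpy as np

def unrolled_forward(y, A, thetas, gammas):
    """Forward pass of an iterative estimator unrolled into 'layers': each
    iteration t carries its own trainable step weight gammas[t] and threshold
    thetas[t], mirroring how DNN-MP-BSBL attaches trainable weights to the
    messages of each message-passing iteration."""
    x = np.zeros(A.shape[1])
    for theta, gamma in zip(thetas, gammas):
        r = x + gamma * (A.T @ (y - A @ x))                  # weighted message/update
        x = np.sign(r) * np.maximum(np.abs(r) - theta, 0.0)  # shrinkage "activation"
    return x
```

Because the weights are fit offline to minimize estimation error, a well-trained unrolled network typically matches the untrained iteration's accuracy in far fewer layers, which is consistent with the iteration savings the abstract reports.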