
User Activity Detection and Channel Estimation for Grant-Free Random Access in LEO Satellite-Enabled Internet-of-Things

Added by Zhaoji Zhang
Publication date: 2020
Language: English





With recent advances in dense low-earth orbit (LEO) constellations, LEO satellite networks have become a promising solution for providing global coverage for Internet-of-Things (IoT) services. Confronted with the sporadic transmissions of randomly activated IoT devices, we consider the random access (RA) mechanism and propose a grant-free RA (GF-RA) scheme to reduce the access delay to the mobile LEO satellites. A Bernoulli-Rician message passing with expectation maximization (BR-MP-EM) algorithm is proposed for this terrestrial-satellite GF-RA system to address the user activity detection (UAD) and channel estimation (CE) problem. The BR-MP-EM algorithm is divided into two stages. In the inner iterations, Bernoulli messages and Rician messages are updated to solve the joint UAD and CE problem. Based on the output of the inner iterations, the expectation maximization (EM) method is employed in the outer iterations to update the hyper-parameters related to the channel impairments. Finally, simulation results demonstrate the UAD and CE accuracy of the proposed BR-MP-EM algorithm, as well as its robustness against channel impairments.
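As an illustration of the two-stage structure described above, the Python sketch below runs simplified inner iterations that update per-device activity beliefs (the Bernoulli part) and channel posteriors (the Rician part, modeled as a non-zero-mean complex Gaussian coefficient), wrapped in outer EM-style re-estimation of the channel hyper-parameters. It is a toy model with assumed dimensions and update rules, not the paper's BR-MP-EM derivation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, eps = 100, 40, 0.1              # devices, pilot length, activity probability
mu_true, var_true, noise_var = 1.0, 0.5, 0.01

A = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2 * M)
act = rng.random(N) < eps             # Bernoulli device activity
# Rician fading modeled as a non-zero-mean complex Gaussian coefficient
h = mu_true + np.sqrt(var_true / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
y = A @ (act * h) + np.sqrt(noise_var / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

mu_hat, var_hat = 0.5, 1.0            # initial guesses of the Rician hyper-parameters
x_hat = np.zeros(N, dtype=complex)    # estimate of a_n * h_n

for outer in range(5):                                    # outer EM iterations
    for inner in range(10):                               # inner message-passing-style iterations
        r = y - A @ x_hat                                 # residual
        for n in range(N):
            e = np.linalg.norm(A[:, n]) ** 2
            z = x_hat[n] + A[:, n].conj() @ r / e         # decoupled per-device observation
            tau = noise_var / e
            g = (var_hat * z + tau * mu_hat) / (var_hat + tau)   # channel posterior if active
            llr = (np.abs(z) ** 2 / tau                   # Bernoulli message: active vs. inactive
                   - np.abs(z - mu_hat) ** 2 / (tau + var_hat)
                   + np.log(tau / (tau + var_hat)))
            llr = np.clip(np.real(llr), -40, 40)
            p_act = 1.0 / (1.0 + (1 - eps) / eps * np.exp(-llr))
            r += A[:, n] * (x_hat[n] - p_act * g)         # keep the residual consistent
            x_hat[n] = p_act * g
    # outer EM step: refit the channel hyper-parameters from the current estimates
    active_set = np.abs(x_hat) > 0.5 * np.abs(mu_hat)
    if active_set.any():
        mu_hat = np.mean(x_hat[active_set])
        var_hat = max(np.real(np.var(x_hat[active_set])), 1e-3)

detected = np.abs(x_hat) > 0.5 * np.abs(mu_hat)
print("activity detection errors:", int(np.sum(detected != act)))
```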



Related research

In the upcoming Internet-of-Things (IoT) era, communication is often characterized by massive connectivity, sporadic transmission, and small data packets, which poses new requirements on the expected delay and resource-allocation efficiency of the random access (RA) mechanisms in the IoT communication stack. A grant-free non-orthogonal random access (NORA) system is considered in this paper, which can simultaneously reduce the access delay and support more machine-type communication (MTC) devices with limited resources. In order to address the joint user activity detection (UAD) and channel estimation (CE) problem in the grant-free NORA system, we propose a deep neural network-aided message passing-based block sparse Bayesian learning (DNN-MP-BSBL) algorithm. In the DNN-MP-BSBL algorithm, the iterative message passing process is transferred from a factor graph to a deep neural network (DNN). Weights are imposed on the messages in the DNN and trained to minimize the estimation error. It is shown that the trained weights alleviate the convergence problem of the MP-BSBL algorithm, especially in crowded RA scenarios. Simulation results show that the proposed DNN-MP-BSBL algorithm improves the UAD and CE accuracy with fewer iterations, indicating its advantages for low-latency grant-free NORA systems.
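The core idea of unfolding message passing iterations into a network with trainable weights can be sketched as follows. This is a generic deep-unfolding example in PyTorch, with hypothetical per-layer step sizes and damping weights and a crude sparsity-promoting update standing in for the actual MP-BSBL messages; it is not the paper's DNN-MP-BSBL implementation.

```python
import torch
import torch.nn as nn

class UnfoldedMP(nn.Module):
    """Iterative estimator unrolled into layers with trainable weights (hypothetical example)."""
    def __init__(self, A, num_layers=8):
        super().__init__()
        self.register_buffer("A", A)                       # pilot/signature matrix (M x N)
        self.step = nn.Parameter(torch.full((num_layers,), 0.1))  # per-layer step sizes
        self.w = nn.Parameter(torch.full((num_layers,), 0.5))     # per-layer damping weights
        self.num_layers = num_layers

    def forward(self, y):
        x = torch.zeros(self.A.shape[1], dtype=y.dtype, device=y.device)
        for t in range(self.num_layers):
            r = y - self.A @ x                             # residual "message"
            x_new = torch.relu(x + self.step[t] * (self.A.t() @ r))  # crude sparsity-promoting update
            x = self.w[t] * x_new + (1 - self.w[t]) * x    # learned damping of the message
        return x

# training sketch: the per-layer weights are fit to minimize the estimation error
M, N = 40, 100
A = torch.randn(M, N) / M ** 0.5
model = UnfoldedMP(A)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for it in range(200):
    x_true = torch.relu(torch.randn(N)) * (torch.rand(N) < 0.1)   # sparse activity pattern
    y = A @ x_true + 0.01 * torch.randn(M)
    loss = torch.mean((model(y) - x_true) ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final training loss:", float(loss))
```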
Xinyu Bian, Yuyi Mao, Jun Zhang (2021)
In the massive machine-type communication (mMTC) scenario, a large number of devices with sporadic traffic need to access the network on limited radio resources. While grant-free random access has emerged as a promising mechanism for massive access, its potential has not been fully unleashed. In particular, the common sparsity pattern in the received pilot and data signals has been ignored in most existing studies, and auxiliary information from channel decoding has not been utilized for user activity detection. This paper develops advanced receivers in a holistic manner for joint activity detection, channel estimation, and data decoding. In particular, a turbo receiver based on the bilinear generalized approximate message passing (BiG-AMP) algorithm is developed. In this receiver, all the received symbols are utilized to jointly estimate the channel state, user activity, and soft data symbols, which effectively exploits the common sparsity pattern. Meanwhile, the extrinsic information from the channel decoder assists the joint channel estimation and data detection. To reduce the complexity, a low-cost side-information-aided receiver is also proposed, in which the channel decoder provides side information to update the estimates of whether a user is active or not. Simulation results show that the turbo receiver effectively reduces the activity detection, channel estimation, and data decoding errors, while the side-information-aided receiver notably outperforms the conventional method with relatively low complexity.
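A minimal illustration of the turbo principle mentioned above, assuming a toy repetition code and a matched-filter estimator in place of BiG-AMP: a soft symbol estimator and a channel decoder exchange extrinsic log-likelihood ratios over a few iterations. All dimensions, the code, and the estimator are illustrative assumptions, not the paper's receiver.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, T, rep = 8, 6, 30, 3                  # users, antennas, coded symbols, repetition factor
noise_var = 0.2
A = rng.standard_normal((M, N)) / np.sqrt(M)            # known user signatures
bits = rng.integers(0, 2, (N, T // rep))                # information bits
S = np.repeat(1 - 2 * bits, rep, axis=1).astype(float)  # BPSK, repetition-coded
Y = A @ S + np.sqrt(noise_var) * rng.standard_normal((M, T))

prior_llr = np.zeros((N, T))                # decoder -> estimator messages
for it in range(4):
    # soft estimator: soft interference cancellation + matched filter per user
    s_mean = np.tanh(prior_llr / 2)         # soft symbols implied by the decoder priors
    ch_llr = np.zeros((N, T))
    for n in range(N):
        others = np.delete(np.arange(N), n)
        R = Y - A[:, others] @ s_mean[others]            # cancel the other users softly
        z = A[:, n] @ R                                  # matched filter output
        ch_llr[n] = 2 * z / noise_var                    # channel LLR for BPSK symbols
    # decoder: repetition code, extrinsic LLR = sum of the *other* copies' channel LLRs
    grouped = ch_llr.reshape(N, -1, rep)
    prior_llr = (grouped.sum(axis=2, keepdims=True) - grouped).reshape(N, T)

bits_hat = (ch_llr.reshape(N, -1, rep).sum(axis=2) < 0).astype(int)
print("bit errors:", int(np.sum(bits_hat != bits)))
```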
Millimeter-wave/terahertz (mmW/THz) communications have shown great potential for wideband massive access in next-generation cellular Internet-of-Things (IoT) networks. To decrease the length of the pilot sequences and the computational complexity in wideband massive access, this paper proposes a novel joint activity detection and channel estimation (JADCE) algorithm. Specifically, after formulating JADCE as the problem of recovering a simultaneously sparse-group and low-rank matrix according to the characteristics of the mmW/THz channel, we prove that jointly imposing an $l_1$ norm and a low-rank constraint on such a matrix achieves robust recovery under sufficient conditions, and verify that the number of measurements derived for the mmW/THz wideband massive access system is significantly smaller than the currently known measurement bounds for conventional simultaneously sparse and low-rank recovery. Furthermore, we propose a multi-rank-aware method by exploiting the quotient geometry of the product of complex rank-$L$ matrices, where $L$ is the number of scattering clusters. Theoretical analysis and simulation results confirm the superiority of the proposed algorithm in terms of computational complexity, detection error rate, and channel estimation accuracy.
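The two regularizers mentioned above correspond to two well-known proximal operators: a soft-thresholding step for the sparsity term (applied row-wise here, one common choice for device activity) and singular value thresholding for the low-rank term. The sketch below alternates them inside a generic proximal-gradient loop on a synthetic row-sparse, low-rank channel matrix; it is a simplification under assumed dimensions and penalties, not the paper's algorithm or its multi-rank-aware method.

```python
import numpy as np

rng = np.random.default_rng(2)
N, Q, L, K = 60, 32, 2, 6                   # devices, pilot length, channel rank, active devices
U = rng.standard_normal((N, L))
V = rng.standard_normal((L, 16))
X_true = U @ V                              # low-rank channel matrix (rank L)
inactive = rng.permutation(N)[K:]
X_true[inactive] = 0                        # row sparsity: inactive devices have zero channels
Phi = rng.standard_normal((Q, N)) / np.sqrt(Q)
Y = Phi @ X_true + 0.01 * rng.standard_normal((Q, X_true.shape[1]))

def soft_rows(X, t):                        # proximal operator of the row-wise l2,1 norm
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return np.maximum(1 - t / np.maximum(norms, 1e-12), 0) * X

def svt(X, t):                              # singular value thresholding (low-rank prox)
    Uu, s, Vt = np.linalg.svd(X, full_matrices=False)
    return Uu @ np.diag(np.maximum(s - t, 0)) @ Vt

X = np.zeros_like(X_true)
step = 1.0 / np.linalg.norm(Phi, 2) ** 2    # 1 / Lipschitz constant of the data-fit gradient
lam_sparse, lam_rank = 0.5, 0.1
for it in range(300):
    grad = Phi.T @ (Phi @ X - Y)            # gradient of the least-squares data-fit term
    X = soft_rows(X - step * grad, step * lam_sparse)
    X = svt(X, step * lam_rank)

row_norms = np.linalg.norm(X, axis=1)
print("detected devices:", np.where(row_norms > 0.1 * row_norms.max())[0])
print("true devices:    ", np.sort(np.setdiff1d(np.arange(N), inactive)))
```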
Dongdong Jiang, Ying Cui (2021)
Device activity detection is a main challenge in grant-free massive access, which has recently been proposed to support massive machine-type communications (mMTC). Existing solutions for device activity detection fail to consider the inter-cell interference generated by massive IoT devices or important prior information on device activities and inter-cell interference. In this paper, given different numbers of observations and network parameters, we consider both non-cooperative and cooperative device activity detection in a multi-cell network consisting of many access points (APs) and IoT devices. Under each detection mechanism, we consider the joint maximum likelihood (ML) estimation and the joint maximum a posteriori probability (MAP) estimation of both device activities and interference powers, utilizing tools from probability, stochastic geometry, and optimization. Each estimation problem is a challenging non-convex problem, and a coordinate descent algorithm is proposed to obtain a stationary point. Each proposed joint ML estimation extends the existing one for a single-cell network by estimating interference powers together with device activities. Each proposed joint MAP estimation further enhances the corresponding joint ML estimation by exploiting the prior distributions of device activities and interference powers. The proposed joint ML and joint MAP estimations under cooperative detection outperform their counterparts under non-cooperative detection, at the cost of increased backhaul burden, required knowledge of network parameters, and computational complexity.
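To make the coordinate descent idea concrete, the following sketch applies the standard covariance-based coordinate update to the large-scale-fading power of each device and adds a heuristic fixed-point update of an unknown interference-plus-noise power. The single-cell setup, the parameter values, and the sigma-squared update rule are assumptions for illustration, not the paper's multi-cell ML/MAP formulations.

```python
import numpy as np

rng = np.random.default_rng(3)
N, L, M, K = 50, 20, 16, 5             # devices, pilot length, antennas, active devices
S = (rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N))) / np.sqrt(2)
gamma_true = np.zeros(N)
gamma_true[rng.permutation(N)[:K]] = 1.0
sigma2_true = 0.1                      # interference-plus-noise power, unknown to the receiver
H = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
W = np.sqrt(sigma2_true / 2) * (rng.standard_normal((L, M)) + 1j * rng.standard_normal((L, M)))
Y = S @ (np.sqrt(gamma_true)[:, None] * H) + W
R_hat = Y @ Y.conj().T / M             # sample covariance of the received signal

gamma, sigma2 = np.zeros(N), 1.0
for sweep in range(20):
    Sigma_inv = np.linalg.inv(S @ (gamma[:, None] * S.conj().T) + sigma2 * np.eye(L))
    for n in range(N):                 # closed-form coordinate update of each gamma_n
        s = S[:, n:n + 1]
        a = np.real(s.conj().T @ Sigma_inv @ s)[0, 0]
        b = np.real(s.conj().T @ Sigma_inv @ R_hat @ Sigma_inv @ s)[0, 0]
        delta = max((b - a) / (a ** 2), -gamma[n])
        gamma[n] += delta
        # Sherman-Morrison rank-one update of the covariance inverse
        Sigma_inv -= delta * (Sigma_inv @ s @ s.conj().T @ Sigma_inv) / (1 + delta * a)
    # heuristic fixed-point update of the interference-plus-noise power coordinate
    Sigma_inv = np.linalg.inv(S @ (gamma[:, None] * S.conj().T) + sigma2 * np.eye(L))
    sigma2 = max(sigma2 * np.real(np.trace(Sigma_inv @ R_hat @ Sigma_inv))
                 / np.real(np.trace(Sigma_inv)), 1e-4)

print("estimated active devices:", np.where(gamma > 0.1)[0])
print("true active devices:     ", np.where(gamma_true > 0)[0])
```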
This paper designs a cooperative activity detection framework for massive grant-free random access in sixth-generation (6G) cell-free wireless networks, based on the covariance of the signals received at the access points (APs). In particular, multiple APs cooperatively detect device activity by exchanging only low-dimensional intermediate local information with their neighbors. The cooperative activity detection problem is non-smooth, and the unknown variables are coupled with each other, so conventional approaches are inapplicable. Therefore, this paper proposes a covariance-based algorithm that exploits sparsity-promoting and similarity-promoting terms on the device state vectors of neighboring APs. An approximate splitting approach based on the proximal gradient method is proposed for solving the formulated problem. Simulation results show that the proposed algorithm is efficient for large-scale activity detection problems, while requiring shorter pilot sequences than state-of-the-art algorithms to achieve the same system performance.
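A generic proximal-gradient sketch of the cooperative idea: each AP takes a gradient step on a smooth covariance-fit term plus a similarity term that keeps its device-state vector close to those of its neighbors, followed by a soft-threshold as the proximal step for the sparsity term, while only the low-dimensional state vectors are exchanged between neighboring APs. The ring topology, losses, and parameters are assumptions, not the paper's exact covariance-based splitting algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
B, N, L, M, K = 3, 40, 16, 12, 4            # APs, devices, pilot length, antennas, active devices
S = (rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N))) / np.sqrt(2)
gamma_true = np.zeros(N)
gamma_true[rng.permutation(N)[:K]] = 1.0
R_hat = []                                  # local sample covariance at each AP
for b in range(B):
    H = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
    W = 0.1 * (rng.standard_normal((L, M)) + 1j * rng.standard_normal((L, M)))
    Y = S @ (np.sqrt(gamma_true)[:, None] * H) + W
    R_hat.append(Y @ Y.conj().T / M)

gammas = np.zeros((B, N))                   # per-AP device-state vectors
step = 1.0 / (2 * L * np.linalg.norm(S, 2) ** 2)   # conservative step size for the smooth part
lam_sparse, lam_sim = 0.5, 1.0
for it in range(1000):
    new = np.empty_like(gammas)
    for b in range(B):
        E = R_hat[b] - S @ (gammas[b][:, None] * S.conj().T)          # covariance misfit
        grad_fit = -2 * np.real(np.einsum('ln,lm,mn->n', S.conj(), E, S))
        neighbors = [gammas[(b - 1) % B], gammas[(b + 1) % B]]        # exchanged low-dim vectors
        grad_sim = 2 * lam_sim * sum(gammas[b] - g for g in neighbors)
        z = gammas[b] - step * (grad_fit + grad_sim)
        new[b] = np.maximum(z - step * lam_sparse, 0)                 # prox: soft threshold, gamma >= 0
    gammas = new

consensus = gammas.mean(axis=0)
print("largest entries of the consensus estimate:", np.sort(np.argsort(consensus)[-K:]))
print("true active devices:                      ", np.where(gamma_true > 0)[0])
```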