
DNN-Aided Message Passing Based Block Sparse Bayesian Learning for Joint User Activity Detection and Channel Estimation

Added by Zhaoji Zhang
Publication date: 2019
Language: English





Faced with massive connectivity, sporadic transmission, and small data packets in future cellular communications, a grant-free non-orthogonal random access (NORA) system is considered in this paper, which could reduce the access delay and support more devices. To address the joint user activity detection (UAD) and channel estimation (CE) problem in the grant-free NORA system, we propose a deep neural network-aided message passing-based block sparse Bayesian learning (DNN-MP-BSBL) algorithm. In this algorithm, the message passing process is transferred from a factor graph to a deep neural network (DNN). Weights are imposed on the messages in the DNN and trained to minimize the estimation error. It is shown that the trained weights can alleviate the convergence problem of the MP-BSBL algorithm. Simulation results show that the proposed DNN-MP-BSBL algorithm improves the UAD and CE accuracy with a smaller number of iterations.
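As a rough illustration of the setting this abstract describes, the sketch below builds a toy grant-free measurement model in which the stacked channel vector is block-sparse: a user's block is nonzero only when that user is active, so joint UAD and CE amounts to recovering both the block support (activity) and the block values (channels). The dimensions, pilot construction, and block size here are illustrative assumptions, not the paper's exact system model.

```python
import numpy as np

rng = np.random.default_rng(0)
N, B, L, K = 40, 2, 16, 4   # potential users, block size per user, pilot length, active users

# Non-orthogonal pilots: more users than pilot dimensions (L < N), so pilots cannot be orthogonal.
P = (rng.standard_normal((L, N * B)) + 1j * rng.standard_normal((L, N * B))) / np.sqrt(2 * L)

# Block-sparse channel vector: a user's B-sized block is zero unless the user is active.
h = np.zeros(N * B, dtype=complex)
active = np.sort(rng.choice(N, size=K, replace=False))
for n in active:
    h[n * B:(n + 1) * B] = (rng.standard_normal(B) + 1j * rng.standard_normal(B)) / np.sqrt(2)

# Received pilot observations at the base station.
y = P @ h + 0.05 * (rng.standard_normal(L) + 1j * rng.standard_normal(L))

# Joint UAD and CE = recover which blocks of h are nonzero (activity) and their values (channels).
print("active users:", active)
```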



Related research

In the upcoming Internet-of-Things (IoT) era, communication is often characterized by massive connectivity, sporadic transmission, and small data packets, which places new requirements on the access delay and resource-allocation efficiency of the Random Access (RA) mechanisms in the IoT communication stack. A grant-free non-orthogonal random access (NORA) system is considered in this paper, which could simultaneously reduce the access delay and support more Machine Type Communication (MTC) devices with limited resources. To address the joint user activity detection (UAD) and channel estimation (CE) problem in the grant-free NORA system, we propose a deep neural network-aided message passing-based block sparse Bayesian learning (DNN-MP-BSBL) algorithm. In the DNN-MP-BSBL algorithm, the iterative message passing process is transferred from a factor graph to a deep neural network (DNN). Weights are imposed on the messages in the DNN and trained to minimize the estimation error. It is shown that the trained weights can alleviate the convergence problem of the MP-BSBL algorithm, especially in crowded RA scenarios. Simulation results show that the proposed DNN-MP-BSBL algorithm improves the UAD and CE accuracy with a smaller number of iterations, indicating its advantages for low-latency grant-free NORA systems.
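The core idea described above, unrolling an iterative message-passing algorithm into network layers and placing trainable weights on the messages, can be sketched generically. The toy example below unrolls a simple gradient-style iteration (not the paper's actual MP-BSBL message updates) and learns one damping weight per layer by minimizing the estimation MSE, which is the role the trained weights play in improving convergence within a fixed, small number of iterations.

```python
import torch
import torch.nn as nn

class UnrolledEstimator(nn.Module):
    """T unrolled iterations; each layer carries a trainable weight on its update message."""
    def __init__(self, T: int, step: float = 0.1):
        super().__init__()
        self.T, self.step = T, step
        self.w = nn.Parameter(torch.full((T,), 0.5))   # per-layer message weights

    def forward(self, y: torch.Tensor, A: torch.Tensor) -> torch.Tensor:
        x = torch.zeros(A.shape[1])
        for t in range(self.T):
            msg = A.t() @ (y - A @ x)                  # "message" computed from the residual
            x = self.w[t] * (x + self.step * msg) + (1 - self.w[t]) * x  # weighted (damped) update
        return x

# Train the weights to minimize the estimation error, mirroring the abstract's training objective.
torch.manual_seed(0)
A = torch.randn(16, 40) / 4.0
model = UnrolledEstimator(T=5)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    x_true = torch.randn(40) * (torch.rand(40) < 0.1).float()   # sparse ground truth
    y = A @ x_true + 0.01 * torch.randn(16)
    loss = torch.mean((model(y, A) - x_true) ** 2)
    opt.zero_grad(); loss.backward(); opt.step()
```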
Grant-free random access is a promising protocol to support massive access in beyond fifth-generation (B5G) cellular Internet-of-Things (IoT) with sporadic traffic. Specifically, in each coherence interval, the base station (BS) performs joint activity detection and channel estimation (JADCE) before data transmission. Due to the deployment of a large-scale antenna array and the existence of a huge number of IoT devices, JADCE usually has high computational complexity and needs long pilot sequences. To address these challenges, this paper proposes a dimension reduction method, which projects the original device state matrix onto a low-dimensional space by exploiting its sparse and low-rank structure. Then, we develop an optimized design framework with a coupled full-column-rank constraint for JADCE to reduce the size of the search space. However, the resulting problem is non-convex and highly intractable, and conventional convex relaxation approaches are inapplicable. To this end, we propose a logarithmic smoothing method for the non-smooth objective function and transform the matrix of interest into a positive semidefinite matrix, followed by a Riemannian trust-region algorithm that solves the problem in the complex field. Simulation results show that the proposed algorithm is efficient for large-scale JADCE problems and requires shorter pilot sequences than state-of-the-art algorithms that only exploit the sparsity of the device state matrix.
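As a loose illustration of the smoothing idea mentioned above, the snippet below uses a log-cosh surrogate, one common logarithmic smoothing of a non-smooth absolute-value/norm term. This is only a generic example of replacing a non-smooth objective with a differentiable one; the paper's actual surrogate, coupled full-column-rank constraint, and Riemannian trust-region solver are not reproduced here.

```python
import numpy as np

def log_smooth(t, eps=0.1):
    # Smooth surrogate for |t|: eps * log(cosh(t / eps)) approaches |t| - eps*log(2) as |t|/eps grows.
    return eps * np.log(np.cosh(t / eps))

for t in [0.0, 0.05, 0.5, 2.0]:
    print(f"t={t:4.2f}   |t|={abs(t):.3f}   smoothed={log_smooth(t):.3f}")
# The surrogate is differentiable at 0 yet tracks |t| closely away from 0, which is what
# makes smooth solvers (e.g., trust-region methods) applicable to an otherwise non-smooth objective.
```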
Millimeter-wave/Terahertz (mmW/THz) communications have shown great potential for wideband massive access in next-generation cellular Internet-of-Things (IoT) networks. To decrease the length of pilot sequences and the computational complexity in wideband massive access, this paper proposes a novel joint activity detection and channel estimation (JADCE) algorithm. Specifically, after formulating JADCE as the problem of recovering a simultaneously sparse-group and low-rank matrix according to the characteristics of the mmW/THz channel, we prove that jointly imposing an $l_1$-norm penalty and a low-rank constraint on such a matrix achieves robust recovery under sufficient conditions, and verify that the number of measurements derived for the mmW/THz wideband massive access system is significantly smaller than the currently known measurement bounds for conventional simultaneously sparse and low-rank recovery. Furthermore, we propose a multi-rank-aware method by exploiting the quotient geometry of the product of complex rank-$L$ matrices, where $L$ is the number of scattering clusters. Theoretical analysis and simulation results confirm the superiority of the proposed algorithm in terms of computational complexity, detection error rate, and channel estimation accuracy.
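To make the "simultaneously sparse and low-rank" structure concrete, the sketch below alternates a gradient step with entrywise soft-thresholding (sparsity) and singular-value thresholding (low rank) on a toy matrix recovery problem. This naive proximal heuristic is only for intuition; it is not the paper's multi-rank-aware Riemannian method and carries none of its recovery guarantees, and all problem sizes and thresholds are illustrative choices.

```python
import numpy as np

def soft(X, tau):                 # prox of tau*||X||_1 (entrywise soft-thresholding)
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):                  # prox of tau*||X||_* (singular value thresholding)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(1)
n1, n2, m = 20, 15, 120
X_true = np.outer(rng.standard_normal(n1), rng.standard_normal(n2))   # rank-1 ...
X_true *= (rng.random((n1, n2)) < 0.3)                                # ... and sparse
A = rng.standard_normal((m, n1 * n2)) / np.sqrt(m)
y = A @ X_true.ravel()

X, step = np.zeros((n1, n2)), 0.1
for _ in range(500):
    grad = (A.T @ (A @ X.ravel() - y)).reshape(n1, n2)
    X = svt(soft(X - step * grad, step * 1e-2), step * 1e-1)   # sparse prox, then low-rank prox

print("relative reconstruction error:", np.linalg.norm(X - X_true) / np.linalg.norm(X_true))
```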
This paper considers crowded massive multiple input multiple output (MIMO) communications over a Rician fading channel, where the number of users is much greater than the number of available pilot sequences. A joint user identification and line-of-sight (LOS) component derivation algorithm is proposed without requiring a threshold. Based on the derived LOS component, we design a LOS-only channel estimator and an updated channel estimator.
Reconfigurable intelligent surfaces (RISs) have recently been considered a promising candidate for energy-efficient solutions in future wireless networks. Their dynamic and low-power configuration enables coverage extension, massive connectivity, and low-latency communications. Due to the large number of unknown variables associated with the RIS unit elements and the transmitted signals, channel estimation and signal recovery are among the most critical technical challenges in RIS-based systems. To address this problem, we focus on the RIS-assisted multi-user wireless communication system and present a joint channel estimation and signal recovery algorithm in this paper. Specifically, we propose a bidirectional approximate message passing algorithm that applies a Taylor series expansion and a Gaussian approximation to simplify the sum-product algorithm for the formulated problem. Simulation results show that the proposed algorithm outperforms a state-of-the-art benchmark method. We also provide insights into the impact of different RIS parameter settings on the proposed algorithm.
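For readers unfamiliar with approximate message passing, the minimal example below runs standard AMP with a soft-threshold denoiser on a single linear model y = Ax + n. This is only background: the paper's bidirectional variant for the cascaded RIS channel, derived from the sum-product algorithm via Taylor expansion and Gaussian approximation, is considerably more involved, and the dimensions here are illustrative.

```python
import numpy as np

def soft(u, tau):                                      # soft-threshold denoiser
    return np.sign(u) * np.maximum(np.abs(u) - tau, 0.0)

rng = np.random.default_rng(2)
m, n, k = 100, 200, 10
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)

x, z = np.zeros(n), y.copy()
for _ in range(30):
    tau = np.sqrt(np.mean(z ** 2))                     # per-iteration effective-noise estimate
    x_new = soft(x + A.T @ z, tau)                     # denoise the pseudo-data
    z = y - A @ x_new + (np.count_nonzero(x_new) / m) * z   # residual with Onsager correction
    x = x_new
print("NMSE:", np.linalg.norm(x - x_true) ** 2 / np.linalg.norm(x_true) ** 2)
```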