
Feature-Aided Adaptive-Tuning Deep Learning for Massive Device Detection

Added by Xiaoming Chen
Publication date: 2020
Language: English





With the rapid development of the Internet of Things (IoT), the upcoming sixth-generation (6G) wireless network is required to support grant-free random access for a massive number of sporadic-traffic devices. In particular, at the beginning of each time slot, the base station (BS) performs joint activity detection and channel estimation (JADCE) based on the pilot sequences received from active devices. Due to the deployment of a large-scale antenna array and the presence of a massive number of IoT devices, conventional JADCE approaches usually have high computational complexity and require long pilot sequences. To address these challenges, this paper proposes a novel deep learning framework for JADCE in 6G wireless networks, which contains a dimension reduction module, a deep learning network module, an active device detection module, and a channel estimation module. Then, prior-feature learning followed by an adaptive-tuning strategy is proposed, where an inner network composed of the expectation-maximization (EM) algorithm and back-propagation is introduced to jointly tune the precision and learn the distribution parameters of the device state matrix. Finally, by designing an inner and outer layer-by-layer training method, a feature-aided adaptive-tuning deep learning network is built. Both theoretical analysis and simulation results confirm that the proposed framework has low computational complexity and requires only short pilot sequences in practical scenarios.
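As a rough illustration of the signal model behind JADCE, the following numpy sketch builds a row-sparse device state matrix (a row is nonzero only when that device is active) and recovers it with a plain group-ISTA baseline, then detects activity by row energy. This is not the paper's learned network: the EM-based precision tuning and layer-by-layer training are not reproduced, and all dimensions, the noise level, and the detection threshold are illustrative assumptions.

    # Minimal sketch of the JADCE signal model and a row-sparse recovery
    # baseline (group-ISTA); the paper's learned network and EM tuning are
    # not reproduced here. All dimensions and parameters are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    N, L, M = 200, 40, 16        # devices, pilot length, BS antennas
    p_active = 0.05              # sporadic traffic: few active devices

    S = rng.standard_normal((L, N)) / np.sqrt(L)        # pilot matrix
    active = rng.random(N) < p_active
    X = np.zeros((N, M))
    X[active] = rng.standard_normal((int(active.sum()), M))  # device state matrix
    Y = S @ X + 0.05 * rng.standard_normal((L, M))      # received pilots

    # Group-ISTA: soft-threshold whole rows to exploit row sparsity.
    def group_ista(Y, S, lam=0.02, step=0.5, iters=200):
        X_hat = np.zeros((S.shape[1], Y.shape[1]))
        for _ in range(iters):
            R = X_hat - step * S.T @ (S @ X_hat - Y)    # gradient step
            norms = np.linalg.norm(R, axis=1, keepdims=True)
            X_hat = np.maximum(1 - lam / np.maximum(norms, 1e-12), 0) * R
        return X_hat

    X_hat = group_ista(Y, S)
    detected = np.linalg.norm(X_hat, axis=1) > 0.1      # activity by row energy
    print("hit rate:", (detected & active).sum() / max(int(active.sum()), 1))

The unfolded-network idea in the paper can be read as replacing the fixed lam and step above with parameters learned per layer.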

Related research

Jiabao Gao, Mu Hu, Caijun Zhong (2021)
Channel estimation is one of the key issues in practical massive multiple-input multiple-output (MIMO) systems. Compared with conventional estimation algorithms, deep learning (DL) based ones have exhibited great potential in terms of performance and complexity. In this paper, an attention mechanism, exploiting the channel distribution characteristics, is proposed to improve the estimation accuracy of highly separable channels with narrow angular spread by realizing the divide-and-conquer policy. Specifically, we introduce a novel attention-aided DL channel estimation framework for conventional massive MIMO systems and devise an embedding method to effectively integrate the attention mechanism into the fully connected neural network for the hybrid analog-digital (HAD) architecture. Simulation results show that in both scenarios, the channel estimation performance is significantly improved with the aid of attention at the cost of small complexity overhead. Furthermore, strong robustness under different system and channel parameters can be achieved by the proposed approach, which further strengthens its practical value. We also investigate the distributions of learned attention maps to reveal the role of attention, which endows the proposed approach with a certain degree of interpretability.
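A minimal sketch of the kind of attention gate this abstract describes, inserted into a fully connected channel estimator: a small subnetwork produces per-feature weights in (0, 1) that rescale the hidden features. The layer sizes and random weights are stand-ins for trained parameters, and the paper's exact embedding method is assumed away.

    # Illustrative forward pass of an attention gate (squeeze-and-excitation
    # style) inside a fully connected estimator; weights are random
    # placeholders for trained parameters.
    import numpy as np

    rng = np.random.default_rng(1)
    y = rng.standard_normal(128)                        # received pilot observations
    W1 = rng.standard_normal((256, 128)) / np.sqrt(128)
    W2 = rng.standard_normal((128, 256)) / np.sqrt(256)
    Wa1 = rng.standard_normal((32, 256)) / np.sqrt(256)
    Wa2 = rng.standard_normal((256, 32)) / np.sqrt(32)

    h = np.maximum(W1 @ y, 0)                           # hidden features
    a = 1 / (1 + np.exp(-(Wa2 @ np.maximum(Wa1 @ h, 0))))  # attention map in (0, 1)
    h_est = W2 @ (a * h)                                # reweighted features -> channel estimate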
A deep-learning-aided successive-cancellation list (DL-SCL) decoding algorithm for polar codes is introduced with deep-learning-aided successive-cancellation (DL-SC) decoding being a specific case of it. The DL-SCL decoder works by allowing additional rounds of SCL decoding when the first SCL decoding attempt fails, using a novel bit-flipping metric. The proposed bit-flipping metric exploits the inherent relations between the information bits in polar codes that are represented by a correlation matrix. The correlation matrix is then optimized using emerging deep-learning techniques. Performance results on a polar code of length 128 with 64 information bits concatenated with a 24-bit cyclic redundancy check show that the proposed bit-flipping metric in the proposed DL-SCL decoder requires up to 66% fewer multiplications and up to 36% fewer additions, without any need to perform transcendental functions, and by providing almost the same error-correction performance in comparison with the state of the art.
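The following sketch shows the shape of a correlation-weighted bit-flipping metric of the kind the DL-SCL decoder learns: a bit is a flip candidate when its own reliability is low and its correlated neighbours are also unreliable. The SCL decoder itself is omitted, and the correlation matrix here is a random placeholder for the deep-learning-optimized one.

    # Sketch of a correlation-weighted bit-flipping metric; the actual SCL
    # decoder and the trained correlation matrix are omitted.
    import numpy as np

    rng = np.random.default_rng(2)
    K = 64                                  # information bits
    llr = rng.standard_normal(K) * 3        # decision LLRs of the info bits
    C = rng.random((K, K))                  # placeholder for the learned matrix

    # Low metric = unreliable bit whose correlated neighbours are also weak.
    metric = np.abs(llr) + C @ np.abs(llr)
    flip_order = np.argsort(metric)         # least reliable first
    print("first bits to flip on SCL retry:", flip_order[:5])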
Pilot contamination, defined as the interference during the channel estimation process due to reusing the same pilot sequences in neighboring cells, can severely degrade the performance of massive multiple-input multiple-output systems. In this paper, we propose a location-based approach to mitigating the pilot contamination problem for uplink multiple-input multiple-output systems. Our approach makes use of the approximate locations of mobile devices to provide good estimates of the channel statistics between the mobile devices and their corresponding base stations. Specifically, we aim at avoiding pilot contamination even when the number of base station antennas is not very large, and when multiple users from different cells, or even in the same cell, are assigned the same pilot sequence. First, we characterize a desired angular region of the target user at the serving base station based on the number of base station antennas and the location of the target user, and make the observation that in this region the interference is close to zero due to the spatial separability. Second, based on this observation, we propose pilot coordination methods for multi-user multi-cell scenarios to avoid pilot contamination. The numerical results indicate that the proposed pilot contamination avoidance schemes enhance the quality of the channel estimation and thereby improve the per-cell sum rate offered by target base stations.
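A toy version of the coordination idea, assuming only that two users may share a pilot when their angles at the serving base station are separated by more than a resolvability margin (which in the paper shrinks as the antenna count grows). The geometry, margin, and greedy assignment below are illustrative, not the paper's method.

    # Toy greedy pilot assignment by angular separation; margin and
    # geometry are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(3)
    angles = np.sort(rng.uniform(-60, 60, size=12))   # user angles in degrees
    margin = 15.0                                     # assumed separability margin

    pilots = []                                       # per-pilot list of assigned angles
    assign = []
    for theta in angles:
        for p, used in enumerate(pilots):
            if all(abs(theta - u) > margin for u in used):
                used.append(theta)
                assign.append(p)
                break
        else:
            pilots.append([theta])
            assign.append(len(pilots) - 1)
    print("pilots needed:", len(pilots), "assignment:", assign)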
We propose a deep-learning approach for the joint MIMO detection and channel decoding problem. Conventional MIMO receivers adopt a model-based approach for MIMO detection and channel decoding in linear or iterative manners. However, due to the complex MIMO signal model, the optimal solution to the joint MIMO detection and channel decoding problem (i.e., the maximum likelihood decoding of the transmitted codewords from the received MIMO signals) is computationally infeasible. As a practical measure, the current model-based MIMO receivers all use suboptimal MIMO decoding methods with affordable computational complexities. This work applies the latest advances in deep learning for the design of MIMO receivers. In particular, we leverage deep neural networks (DNN) with supervised training to solve the joint MIMO detection and channel decoding problem. We show that DNN can be trained to give much better decoding performance than conventional MIMO receivers do. Our simulations show that a DNN implementation consisting of seven hidden layers can outperform conventional model-based linear or iterative receivers. This performance improvement points to a new direction for future MIMO receiver design.
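A shape-level sketch of the receiver architecture the abstract mentions: seven ReLU hidden layers mapping stacked received samples to codeword bit estimates. Random weights stand in for the supervised-trained parameters, and all layer sizes are assumptions.

    # Shape-level sketch of a seven-hidden-layer fully connected receiver;
    # sizes and weights are illustrative stand-ins for trained parameters.
    import numpy as np

    rng = np.random.default_rng(4)
    n_in, n_hidden, n_bits = 64, 128, 32      # illustrative sizes
    Ws = [rng.standard_normal((n_hidden, n_in)) / np.sqrt(n_in)]
    Ws += [rng.standard_normal((n_hidden, n_hidden)) / np.sqrt(n_hidden)
           for _ in range(6)]                 # 7 hidden layers in total
    W_out = rng.standard_normal((n_bits, n_hidden)) / np.sqrt(n_hidden)

    x = rng.standard_normal(n_in)             # stacked real/imag RX samples
    for W in Ws:
        x = np.maximum(W @ x, 0)              # ReLU hidden layers
    bits = 1 / (1 + np.exp(-W_out @ x)) > 0.5 # sigmoid, then hard decision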
In this paper, we propose a model-driven deep learning network for multiple-input multiple-output (MIMO) detection. The structure of the network is specially designed by unfolding an iterative algorithm, and a small set of trainable parameters is optimized through deep learning techniques to improve the detection performance. Since the number of trainable variables of the network is equal to the number of layers, the network can be trained within a very short time. Furthermore, the network can handle time-varying channels with only a single training. Numerical results show that the proposed approach significantly improves the performance of the iterative algorithm under Rayleigh and correlated MIMO channels.
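To make the unfolding idea concrete, the sketch below writes each "layer" as one gradient step of an iterative detector with its own scalar step size, so the trainable-parameter count scales with depth exactly as the abstract notes. The step sizes are fixed here rather than learned, and the BPSK setup is an illustrative assumption.

    # Sketch of an unfolded iterative MIMO detector: one step size per
    # layer; steps are fixed here instead of trained.
    import numpy as np

    rng = np.random.default_rng(5)
    Nt, Nr = 4, 8
    H = rng.standard_normal((Nr, Nt))
    x_true = rng.choice([-1.0, 1.0], size=Nt)      # BPSK symbols
    y = H @ x_true + 0.1 * rng.standard_normal(Nr)

    steps = [0.05] * 10              # one trainable parameter per layer
    x = np.zeros(Nt)
    for g in steps:
        x = x - g * H.T @ (H @ x - y)    # unfolded gradient layer
        x = np.tanh(2 * x)               # soft projection toward {-1, +1}
    print("detected:", np.sign(x), "true:", x_true)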