
Deep Learning Based Packet Detection and Carrier Frequency Offset Estimation in IEEE 802.11ah

Published by Dejan Vukobratovic
Publication date: 2020
Paper language: English





Wi-Fi systems based on the IEEE 802.11 standards are the most popular wireless interfaces that use the Listen Before Talk (LBT) method for channel access. The distinctive feature of most LBT-based systems is that the transmitters prepend preambles to the data to allow the receivers to perform packet detection and carrier frequency offset (CFO) estimation. Preambles usually contain repetitions of training symbols with good correlation properties, and conventional digital receivers apply correlation-based methods for both packet detection and CFO estimation. In recent years, however, data-driven machine learning methods have been disrupting physical layer research, with promising results presented in particular in the domain of deep learning (DL)-based channel estimation. In this paper, we present a performance and complexity analysis of packet detection and CFO estimation using both the conventional and the DL-based approaches. The goal of the study is to investigate under which conditions the performance of the DL-based methods approaches or even surpasses that of the conventional methods, and under which conditions it is inferior. Focusing on the emerging IEEE 802.11ah standard, our investigation uses both a standard-based simulated environment and a real-world testbed based on Software Defined Radios.
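The conventional correlation-based receiver mentioned in the abstract can be illustrated with a short sketch. Below is a minimal NumPy example of Schmidl-and-Cox-style autocorrelation over a preamble built from repeated training symbols, used jointly for packet detection and fractional CFO estimation. The function name, the training-symbol length `L_sym`, and the detection threshold are illustrative assumptions and do not reproduce the exact IEEE 802.11ah preamble parameters or the detectors evaluated in the paper.

```python
import numpy as np

def detect_and_estimate_cfo(rx, L_sym, fs, threshold=0.8):
    """Correlation-based packet detection and CFO estimation (sketch).

    The preamble repeats a training symbol of length L_sym samples, so the
    received signal correlates strongly with itself delayed by L_sym.
    rx        : complex baseband samples (1-D array)
    L_sym     : length of one repeated training symbol, in samples
    fs        : sample rate in Hz
    threshold : detection threshold on the normalized metric in [0, 1]
    """
    N = len(rx) - 2 * L_sym
    P = np.zeros(N, dtype=complex)
    metric = np.zeros(N)
    for d in range(N):
        a = rx[d:d + L_sym]
        b = rx[d + L_sym:d + 2 * L_sym]
        P[d] = np.sum(np.conj(a) * b)            # delayed autocorrelation
        R = np.sum(np.abs(b) ** 2)               # energy normalization
        metric[d] = np.abs(P[d]) ** 2 / (R ** 2 + 1e-12)

    d_hat = int(np.argmax(metric))               # candidate packet start
    detected = metric[d_hat] > threshold
    # A CFO of f0 rotates the second symbol copy by 2*pi*f0*L_sym/fs radians,
    # so the phase of the autocorrelation peak yields the fractional CFO.
    cfo_hat = np.angle(P[d_hat]) * fs / (2 * np.pi * L_sym)
    return detected, d_hat, cfo_hat
```

With a training-symbol period of `L_sym` samples, this phase-based estimator is unambiguous only for CFOs up to about ±fs/(2·L_sym); larger offsets require the coarse/fine two-stage estimation typically applied to longer preamble structures.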




Read also

In this paper, we propose a frequency-time division network (FreqTimeNet) to improve the performance of deep learning (DL) based OFDM channel estimation. FreqTimeNet is designed based on the orthogonality between the frequency domain and the time domain: the input is processed by parallel frequency blocks and parallel time blocks, applied sequentially. By introducing an attention mechanism that exploits the SNR information, an attention-based FreqTimeNet (AttenFreqTimeNet) is proposed. Using 3rd Generation Partnership Project (3GPP) channel models, the mean square error (MSE) performance of FreqTimeNet and AttenFreqTimeNet under different scenarios is evaluated. A method for constructing mixed training data is proposed, which could address the generalization problem in DL. It is observed that AttenFreqTimeNet outperforms FreqTimeNet, and FreqTimeNet outperforms other DL networks, with acceptable complexity.
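As a rough illustration of the frequency-then-time separation described in this abstract, the hedged PyTorch sketch below first applies a shared block along the subcarrier (frequency) axis of a noisy channel-estimate grid and then a shared block along the OFDM-symbol (time) axis. The class name, layer sizes, and the use of plain MLP blocks are assumptions made for illustration, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class FreqTimeSketch(nn.Module):
    """Hypothetical frequency-then-time refinement of an LS channel estimate.

    Input shape: (batch, n_sc, n_sym), where n_sc is the number of
    subcarriers and n_sym the number of OFDM symbols; real and imaginary
    parts can be processed as two separate passes to keep the sketch short.
    """
    def __init__(self, n_sc, n_sym, hidden=128):
        super().__init__()
        # "Frequency block": an MLP shared over OFDM symbols,
        # applied along the subcarrier axis.
        self.freq_block = nn.Sequential(
            nn.Linear(n_sc, hidden), nn.ReLU(), nn.Linear(hidden, n_sc))
        # "Time block": an MLP shared over subcarriers,
        # applied along the OFDM-symbol axis.
        self.time_block = nn.Sequential(
            nn.Linear(n_sym, hidden), nn.ReLU(), nn.Linear(hidden, n_sym))

    def forward(self, h_ls):
        # h_ls: (batch, n_sc, n_sym). Process along frequency first.
        x = h_ls.transpose(1, 2)       # (batch, n_sym, n_sc)
        x = self.freq_block(x)         # shared weights across symbols
        x = x.transpose(1, 2)          # (batch, n_sc, n_sym)
        x = self.time_block(x)         # shared weights across subcarriers
        return x
```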
Faced with massive connections, sporadic transmissions, and small data packets in future cellular communications, a grant-free non-orthogonal random access (NORA) system is considered in this paper, which could reduce the access delay and support more devices. In order to address the joint user activity detection (UAD) and channel estimation (CE) problem in the grant-free NORA system, we propose a deep neural network-aided message passing-based block sparse Bayesian learning (DNN-MP-BSBL) algorithm. In this algorithm, the message passing process is transferred from a factor graph to a deep neural network (DNN). Weights are imposed on the messages in the DNN and trained to minimize the estimation error. It is shown that the weights could alleviate the convergence problem of the MP-BSBL algorithm. Simulation results show that the proposed DNN-MP-BSBL algorithm could improve the UAD and CE accuracy with a smaller number of iterations.
Grant-free random access is a promising protocol to support massive access in beyond fifth-generation (B5G) cellular Internet-of-Things (IoT) with sporadic traffic. Specifically, in each coherence interval, the base station (BS) performs joint activity detection and channel estimation (JADCE) before data transmission. Due to the deployment of a large-scale antenna array and the existence of a huge number of IoT devices, JADCE usually has high computational complexity and needs long pilot sequences. To address these challenges, this paper proposes a dimension reduction method, which projects the original device state matrix onto a low-dimensional space by exploiting its sparse and low-rank structure. We then develop an optimized design framework with a coupled full column rank constraint for JADCE to reduce the size of the search space. However, the resulting problem is non-convex and highly intractable, and conventional convex relaxation approaches are inapplicable. To this end, we propose a logarithmic smoothing method for the non-smooth objective function and transform the matrix of interest into a positive semidefinite matrix, followed by a Riemannian trust-region algorithm to solve the problem in the complex field. Simulation results show that the proposed algorithm is efficient for large-scale JADCE problems and requires shorter pilot sequences than state-of-the-art algorithms that only exploit the sparsity of the device state matrix.
Jiabao Gao, Mu Hu, Caijun Zhong (2021)
Channel estimation is one of the key issues in practical massive multiple-input multiple-output (MIMO) systems. Compared with conventional estimation algorithms, deep learning (DL) based ones have exhibited great potential in terms of performance and complexity. In this paper, an attention mechanism, exploiting the channel distribution characteristics, is proposed to improve the estimation accuracy of highly separable channels with narrow angular spread by realizing the divide-and-conquer policy. Specifically, we introduce a novel attention-aided DL channel estimation framework for conventional massive MIMO systems and devise an embedding method to effectively integrate the attention mechanism into the fully connected neural network for the hybrid analog-digital (HAD) architecture. Simulation results show that in both scenarios, the channel estimation performance is significantly improved with the aid of attention at the cost of small complexity overhead. Furthermore, strong robustness under different system and channel parameters can be achieved by the proposed approach, which further strengthens its practical value. We also investigate the distributions of learned attention maps to reveal the role of attention, which endows the proposed approach with a certain degree of interpretability.
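The attention mechanism described above can be sketched, under assumptions, as a squeeze-and-excitation-style gating that reweights the hidden features of a fully connected estimator before the output layer. The class name, dimensions, and gating structure below are hypothetical and are only meant to convey the general idea; the embedding method and HAD-specific design in the paper may differ.

```python
import torch
import torch.nn as nn

class AttentionAidedEstimator(nn.Module):
    """Hypothetical sketch: a fully connected channel estimator whose hidden
    features are reweighted by a learned attention (gating) vector."""
    def __init__(self, in_dim, out_dim, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # Attention branch: produces a per-feature score in (0, 1).
        self.attention = nn.Sequential(
            nn.Linear(hidden, hidden // 4), nn.ReLU(),
            nn.Linear(hidden // 4, hidden), nn.Sigmoid())
        self.decoder = nn.Linear(hidden, out_dim)

    def forward(self, y):
        # y: received pilot observations flattened into a real-valued vector.
        feat = self.encoder(y)
        weights = self.attention(feat)   # learned per-feature attention map
        return self.decoder(feat * weights)
```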
A deep learning assisted sum-product detection algorithm (DL-SPA) for faster-than-Nyquist (FTN) signaling is proposed in this paper. The proposed detection algorithm concatenates a neural network to the variable nodes of the conventional factor graph of the FTN system to help the detector converge to the a posteriori probabilities based on the received sequence. More specifically, the neural network acts as a function node in the modified factor graph to deal with the residual intersymbol interference (ISI) that is not modeled by the conventional detector with a limited number of ISI taps. We modify the updating rule of the conventional sum-product algorithm so that the neural network assisted detector can be incorporated into a Turbo equalization. Furthermore, a simplified convolutional neural network is employed as the neural network function node to enhance the detector's performance, and the neural network requires only a small number of training batches. Simulation results show that the proposed DL-SPA achieves a performance gain of up to 2.5 dB at the same bit error rate compared to the conventional sum-product detection algorithm under the same ISI responses.