
Deep Learning Based OFDM Channel Estimation Using Frequency-Time Division and Attention Mechanism

Published by: Ang Yang
Publication date: 2021
Research language: English





In this paper, we propose a frequency-time division network (FreqTimeNet) to improve the performance of deep learning (DL) based OFDM channel estimation. FreqTimeNet is designed around the orthogonality between the frequency domain and the time domain: the input is processed first by parallel frequency blocks and then by parallel time blocks. By introducing an attention mechanism that exploits the SNR information, an attention-based FreqTimeNet (AttenFreqTimeNet) is proposed. Using 3rd Generation Partnership Project (3GPP) channel models, the mean square error (MSE) performance of FreqTimeNet and AttenFreqTimeNet is evaluated under different scenarios. A method for constructing mixed training data is proposed to address the generalization problem in DL. It is observed that AttenFreqTimeNet outperforms FreqTimeNet, and that FreqTimeNet outperforms other DL networks, with acceptable complexity.
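The abstract does not give layer sizes or the exact placement of the attention stage, so the following is only a minimal PyTorch sketch of the idea it describes: per-OFDM-symbol blocks along the frequency axis, then per-subcarrier blocks along the time axis, with an SNR-driven gate in between. All dimensions, the sigmoid (SE-style) gating, and names such as `FreqBlock` and `TimeBlock` are our illustrative assumptions, not the paper's design.

```python
# Minimal sketch of the FreqTimeNet / AttenFreqTimeNet idea (assumed form).
import torch
import torch.nn as nn

class FreqBlock(nn.Module):
    """MLP applied along the frequency axis of one OFDM symbol (assumed)."""
    def __init__(self, n_subcarriers, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * n_subcarriers, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * n_subcarriers),
        )
    def forward(self, x):                  # x: (batch, 2 * n_subcarriers)
        return self.net(x)

class TimeBlock(nn.Module):
    """MLP applied along the time axis of one subcarrier (assumed)."""
    def __init__(self, n_symbols, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * n_symbols, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * n_symbols),
        )
    def forward(self, x):                  # x: (batch, 2 * n_symbols)
        return self.net(x)

class AttenFreqTimeNet(nn.Module):
    def __init__(self, n_subcarriers=72, n_symbols=14):
        super().__init__()
        self.freq_blocks = nn.ModuleList(
            [FreqBlock(n_subcarriers) for _ in range(n_symbols)])
        self.time_blocks = nn.ModuleList(
            [TimeBlock(n_symbols) for _ in range(n_subcarriers)])
        # SNR-driven attention: gates the subcarrier dimension (assumption).
        self.atten = nn.Sequential(nn.Linear(1, 32), nn.ReLU(),
                                   nn.Linear(32, n_subcarriers), nn.Sigmoid())

    def forward(self, h_ls, snr_db):
        # h_ls: (batch, n_symbols, n_subcarriers, 2) real/imag LS estimates
        b, T, F, _ = h_ls.shape
        # 1) parallel frequency blocks, one per OFDM symbol
        x = torch.stack([self.freq_blocks[t](h_ls[:, t].reshape(b, -1))
                         .reshape(b, F, 2) for t in range(T)], dim=1)
        # 2) SNR attention gates the per-subcarrier features
        gate = self.atten(snr_db.view(b, 1)).view(b, 1, F, 1)
        x = x * gate
        # 3) parallel time blocks, one per subcarrier
        x = torch.stack([self.time_blocks[f](x[:, :, f].reshape(b, -1))
                         .reshape(b, T, 2) for f in range(F)], dim=2)
        return x

net = AttenFreqTimeNet()
out = net(torch.randn(2, 14, 72, 2), torch.tensor([10.0, 20.0]))
print(out.shape)   # torch.Size([2, 14, 72, 2])
```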




Read also

Jiabao Gao, Mu Hu, Caijun Zhong (2021)
Channel estimation is one of the key issues in practical massive multiple-input multiple-output (MIMO) systems. Compared with conventional estimation algorithms, deep learning (DL) based ones have exhibited great potential in terms of performance and complexity. In this paper, an attention mechanism that exploits the channel distribution characteristics is proposed to improve the estimation accuracy of highly separable channels with narrow angular spread by realizing a divide-and-conquer policy. Specifically, we introduce a novel attention-aided DL channel estimation framework for conventional massive MIMO systems and devise an embedding method to effectively integrate the attention mechanism into the fully connected neural network for the hybrid analog-digital (HAD) architecture. Simulation results show that in both scenarios the channel estimation performance is significantly improved with the aid of attention, at the cost of a small complexity overhead. Furthermore, strong robustness under different system and channel parameters can be achieved by the proposed approach, which further strengthens its practical value. We also investigate the distributions of the learned attention maps to reveal the role of attention, which endows the proposed approach with a certain degree of interpretability.
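A hedged PyTorch sketch of embedding an attention stage into a fully connected channel estimator, in the spirit of the framework above. The layer sizes, the sigmoid gating, and the class name `AttentionFCEstimator` are illustrative assumptions only; the paper's actual embedding method may differ.

```python
import torch
import torch.nn as nn

class AttentionFCEstimator(nn.Module):
    def __init__(self, n_ant=64, hidden=256):
        super().__init__()
        self.n_ant = n_ant
        self.backbone = nn.Sequential(
            nn.Linear(2 * n_ant, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Attention branch: per-antenna weights learned from the input
        # itself, so the network can emphasize the few dominant directions
        # of a highly separable (narrow angular spread) channel.
        self.gate = nn.Sequential(nn.Linear(2 * n_ant, n_ant), nn.Sigmoid())
        self.head = nn.Linear(hidden, 2 * n_ant)

    def forward(self, y):                    # y: (batch, 2 * n_ant) LS input
        w = self.gate(y)                     # (batch, n_ant) attention map
        h = self.head(self.backbone(y))      # raw estimate, (batch, 2 * n_ant)
        h = h.view(-1, self.n_ant, 2) * w.unsqueeze(-1)  # gate real/imag pairs
        return h.reshape(-1, 2 * self.n_ant)

est = AttentionFCEstimator()
print(est(torch.randn(4, 128)).shape)        # torch.Size([4, 128])
```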
Wi-Fi systems based on the IEEE 802.11 standards are the most popular wireless interfaces that use the Listen Before Talk (LBT) method for channel access. The distinctive feature of the majority of LBT-based systems is that transmitters prepend preambles to the data so that receivers can perform packet detection and carrier frequency offset (CFO) estimation. Preambles usually contain repetitions of training symbols with good correlation properties, and conventional digital receivers apply correlation-based methods for both packet detection and CFO estimation. In recent years, however, data-driven machine learning methods have been disrupting physical layer research, with promising results presented in particular in the domain of deep learning (DL) based channel estimation. In this paper, we present a performance and complexity analysis of packet detection and CFO estimation using both the conventional and the DL-based approaches. The goal of the study is to investigate under which conditions the DL-based methods approach or even surpass the conventional methods in performance, and under which conditions they remain inferior. Focusing on the emerging IEEE 802.11ah standard, our investigation uses both a standard-based simulated environment and a real-world testbed based on Software Defined Radios.
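As a reference point for the conventional approach mentioned above, the classic correlation-based CFO estimator (Moose/Schmidl-Cox style) fits in a few lines of NumPy: with a preamble of two identical L-sample training symbols, the CFO appears as a phase rotation of 2*pi*f*L between samples L apart. The preamble structure, noise level, and parameter names below are our illustrative assumptions, not the 802.11ah numerology.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 64                                    # training-symbol length in samples
sym = np.exp(1j * np.pi / 2 * rng.integers(0, 4, L))  # QPSK training symbol
preamble = np.tile(sym, 2)                # two identical repetitions

f_true = 0.002                            # true CFO, cycles/sample (|f| < 1/(2L))
n = np.arange(2 * L)
rx = preamble * np.exp(2j * np.pi * f_true * n)
rx += 0.05 * (rng.standard_normal(2 * L) + 1j * rng.standard_normal(2 * L))

# Autocorrelation between the two repetitions; the data cancels out and the
# CFO survives as the phase 2*pi*f*L of the correlation sum.
P = np.sum(np.conj(rx[:L]) * rx[L:2 * L])
f_hat = np.angle(P) / (2 * np.pi * L)
print(f"true CFO {f_true:.5f}, estimated {f_hat:.5f} cycles/sample")
```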
It is well known that compressed sensing (CS) can boost massive random access protocols. Usually, such protocols operate in an overloaded regime where sparsity can be exploited. In this paper, we consider a different approach: we take an orthogonal FFT base, subdivide its image into appropriate sub-channels, and let each sub-channel carry only a fraction of the load. To show that this approach can actually achieve the full capacity, we i) provide new concentration inequalities, and ii) devise a sparsity capture effect, i.e., the sub-division can be driven such that the activity in each sub-channel is sparse by design. We show by simulations that the system is scalable, resulting in roughly a 30-fold capacity increase.
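A toy NumPy illustration of the sparsity capture idea described above: split an orthogonal N-point FFT base into G disjoint sub-channels and spread the random-access load so that each sub-channel is sparse by design. All sizes are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N, G, K = 1024, 16, 96                     # FFT size, sub-channels, active users
F = np.fft.fft(np.eye(N)) / np.sqrt(N)     # orthogonal FFT base (one column per user)
subchannels = np.split(np.arange(N), G)    # disjoint column groups of the base

users = rng.choice(N, size=K, replace=False)   # random subset of active users
y = F[:, users].sum(axis=1)                    # superimposed access signal at the receiver

# Each sub-channel sees only ~K/G of the total activity: sparse by design.
load = [np.intersect1d(users, s).size for s in subchannels]
print("per-sub-channel activity:", load)
print("max per-sub-channel load:", max(load), "of", N // G, "columns")
```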
This paper analyzes the impact of non-Gaussian multipath component (MPC) amplitude distributions on the performance of Compressed Sensing (CS) channel estimators for OFDM systems. The number of dominant MPCs that any CS algorithm needs to estimate in order to accurately represent the channel is characterized. This number relates to a Compressibility Index (CI) of the channel that depends on the fourth moment of the MPC amplitude distribution. A connection between the mean squared error (MSE) of any CS estimation algorithm and the fourth moment of the MPC amplitude distribution is revealed, showing that fewer MPCs need to be estimated to represent the channel well when these components have amplitude gains with a large fourth moment. The analytical results are validated via simulations for channels with lognormal MPCs, such as the NYU mmWave channel model. These simulations show that when the MPC amplitude distribution has a high fourth moment, the well-known CS algorithm Orthogonal Matching Pursuit performs almost identically to the Basis Pursuit De-Noising algorithm at a much lower computational cost.
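Since the comparison above names Orthogonal Matching Pursuit, here is a minimal NumPy sketch of OMP recovering a sparse multipath channel from pilot observations. The pilot/dictionary sizes and the lognormal amplitude draw are our assumptions for illustration.

```python
import numpy as np

def omp(A, y, n_iter):
    """Greedy OMP: pick the atom best matching the residual, re-fit by LS."""
    support, residual = [], y.copy()
    for _ in range(n_iter):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        As = A[:, support]
        x_s, *_ = np.linalg.lstsq(As, y, rcond=None)  # LS on current support
        residual = y - As @ x_s
    x = np.zeros(A.shape[1], complex)
    x[support] = x_s
    return x

rng = np.random.default_rng(2)
M, N, K = 64, 256, 6                       # pilots, delay grid, dominant MPCs
pilot_tones = rng.choice(N, M, replace=False)
A = np.exp(-2j * np.pi * np.outer(pilot_tones, np.arange(N)) / N) / np.sqrt(M)

h = np.zeros(N, complex)                   # sparse delay-domain channel
taps = rng.choice(N, K, replace=False)
h[taps] = rng.lognormal(0.0, 1.0, K) * np.exp(2j * np.pi * rng.random(K))
y = A @ h + 0.01 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

h_hat = omp(A, y, K)
print("support recovered:", sorted(taps) == sorted(np.flatnonzero(h_hat)))
print("MSE:", np.mean(np.abs(h - h_hat) ** 2))
```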
Lei Wan, Jiang Zhu, En Cheng (2021)
In this paper, we propose an iterative receiver based on gridless variational Bayesian line spectral estimation (VALSE), named JCCD-VALSE, that jointly estimates the carrier frequency offset (CFO), estimates the channel with high resolution, and carries out data decoding. Based on a modularized point of view and motivated by the high resolution and low complexity of the gridless VALSE algorithm, three modules are built: the VALSE module, the minimum mean squared error (MMSE) module, and the decoder module. Soft information is exchanged between the modules to progressively improve the channel estimation and data decoding accuracy. Since the delays of the multipaths of the channel are treated as continuous parameters, instead of being constrained to a grid, the leakage effect is avoided. Besides, the proposed approach is a more complete Bayesian approach, as all the nuisance parameters, such as the noise variance, the parameters of the prior distribution of the channel, and the number of paths, are automatically estimated. Numerical simulations and sea test data demonstrate that the proposed approach performs significantly better than the existing grid-based generalized approximate message passing (GAMP) based joint channel and data decoding approach (JCD-GAMP). Furthermore, it is also verified that joint processing including CFO estimation provides a performance gain.
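A highly simplified NumPy skeleton of the iterate-estimate-equalize-decode loop described above. Gridless VALSE and the soft-input decoder are far beyond a sketch, so they are replaced here by pilot interpolation, a per-tone MMSE symbol estimate, and hard QPSK decisions acting as virtual pilots; this is a stand-in for illustration, not the JCCD-VALSE algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
N, step = 64, 8                           # subcarriers, pilot spacing
pilots = np.arange(0, N, step)
g = (rng.standard_normal(4) + 1j * rng.standard_normal(4)) / np.sqrt(8)
h = np.fft.fft(g, N)                      # smooth frequency response (4 taps)
x = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, N)))  # QPSK data
x[pilots] = 1.0                           # known pilot symbols
sigma2 = 0.02
y = h * x + np.sqrt(sigma2 / 2) * (rng.standard_normal(N)
                                   + 1j * rng.standard_normal(N))

# initial channel estimate: LS at the pilots, linear interpolation in between
h_p = y[pilots] / x[pilots]
h_hat = (np.interp(np.arange(N), pilots, h_p.real)
         + 1j * np.interp(np.arange(N), pilots, h_p.imag))

for it in range(3):
    # "MMSE module": per-tone MMSE symbol estimate given the current channel
    x_mmse = np.conj(h_hat) * y / (np.abs(h_hat) ** 2 + sigma2)
    # "decoder" stand-in: hard QPSK decisions serve as virtual pilots
    k = np.round((np.angle(x_mmse) - np.pi / 4) / (np.pi / 2))
    x_dec = np.exp(1j * (np.pi / 4 + np.pi / 2 * k))
    x_dec[pilots] = 1.0
    # "channel module" stand-in: decision-directed per-tone re-estimate
    h_hat = y / x_dec
    print(f"iteration {it}: {np.sum(np.abs(x_dec - x) > 1e-3)} symbol errors")
```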