
Joint CFO, Gridless Channel Estimation and Data Detection for Underwater Acoustic OFDM Systems

Published by Jiang Zhu
Publication date: 2021
Paper language: English





In this paper, we propose an iterative receiver based on gridless variational Bayesian line spectral estimation (VALSE), named JCCD-VALSE, that jointly estimates the carrier frequency offset (CFO), estimates the channel with high resolution, and carries out data decoding. From a modularized point of view, and motivated by the high resolution and low complexity of the gridless VALSE algorithm, three modules are built: the VALSE module, the minimum mean squared error (MMSE) module, and the decoder module. Soft information is exchanged between the modules to progressively improve the channel estimation and data decoding accuracy. Since the delays of the channel multipaths are treated as continuous parameters rather than confined to a grid, the leakage effect is avoided. Moreover, the proposed approach is a more complete Bayesian approach, as all the nuisance parameters, such as the noise variance, the parameters of the channel prior distribution, and the number of paths, are estimated automatically. Numerical simulations and sea-test data demonstrate that the proposed approach performs significantly better than the existing grid-based generalized approximate message passing (GAMP) based joint channel and data decoding approach (JCD-GAMP). Furthermore, it is also verified that joint processing, including CFO estimation, provides a performance gain.
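The iterative exchange between the three modules can be sketched with drastically simplified stand-ins. Everything below is an illustrative assumption: a flat scalar channel replaces the off-grid multipath model fitted by VALSE, a scalar least-squares fit replaces the VALSE module, and BPSK hard decisions replace the actual soft-output decoder. The point is only the loop structure, in which each module's output refines the next module's input.

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_channel(rx, soft):
    # Stand-in channel estimator (scalar least squares); the paper's
    # VALSE module instead fits multipath delays off the grid.
    return np.vdot(soft, rx) / np.vdot(soft, soft)

def mmse_equalize(rx, h, noise_var):
    # MMSE equalization of each tone given the channel estimate.
    return np.conj(h) * rx / (np.abs(h) ** 2 + noise_var)

def decode(eq):
    # Hard-decision "decoder" for BPSK; a real decoder passes soft bits.
    s = np.sign(eq.real)
    s[0] = 1.0  # tone 0 is a known pilot anchoring the sign ambiguity
    return s

def iterative_receiver(rx, noise_var, n_iter=3):
    soft = np.zeros(rx.size)
    soft[0] = 1.0  # initially only the pilot is known
    for _ in range(n_iter):
        h = estimate_channel(rx, soft)        # channel module
        eq = mmse_equalize(rx, h, noise_var)  # MMSE module
        soft = decode(eq)                     # decoder module
    return soft, h

# Toy flat-fading OFDM frame: 64 BPSK tones through a scalar channel.
h_true = 0.8 * np.exp(1j * 0.3)
symbols = rng.choice([-1.0, 1.0], size=64)
symbols[0] = 1.0
noise = 0.01 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
rx = h_true * symbols + noise
detected, h_hat = iterative_receiver(rx, noise_var=2e-4)
```

After the first pass the channel estimate rests on the pilot alone; once the decoder returns the full symbol vector, the channel re-estimate averages over all tones and tightens accordingly, which is the progressive-refinement behavior the abstract describes.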




Read also

Millimeter-wave/Terahertz (mmW/THz) communications have shown great potential for wideband massive access in next-generation cellular Internet-of-Things (IoT) networks. To shorten the pilot sequences and reduce the computational complexity of wideband massive access, this paper proposes a novel joint activity detection and channel estimation (JADCE) algorithm. Specifically, after formulating JADCE as the problem of recovering a simultaneously sparse-group and low-rank matrix according to the characteristics of the mmW/THz channel, we prove that jointly imposing the $l_1$ norm and a low-rank constraint on such a matrix achieves robust recovery under sufficient conditions, and verify that the number of measurements required for the mmW/THz wideband massive access system is significantly smaller than the currently known measurement bound for conventional simultaneously sparse and low-rank recovery. Furthermore, we propose a multi-rank-aware method that exploits the quotient geometry of the product of complex rank-$L$ matrices, where $L$ is the number of scattering clusters. Theoretical analysis and simulation results confirm the superiority of the proposed algorithm in terms of computational complexity, detection error rate, and channel estimation accuracy.
Grant-free random access is a promising protocol to support massive access in beyond-fifth-generation (B5G) cellular Internet-of-Things (IoT) with sporadic traffic. Specifically, in each coherence interval, the base station (BS) performs joint activity detection and channel estimation (JADCE) before data transmission. Owing to the deployment of a large-scale antenna array and the existence of a huge number of IoT devices, JADCE usually has high computational complexity and needs long pilot sequences. To address these challenges, this paper proposes a dimension reduction method that projects the original device state matrix onto a low-dimensional space by exploiting its sparse and low-rank structure. We then develop an optimized design framework with a coupled full-column-rank constraint for JADCE that reduces the size of the search space. However, the resulting problem is non-convex and highly intractable, so conventional convex relaxation approaches are inapplicable. To this end, we propose a logarithmic smoothing method for the non-smooth objective function, transform the matrix of interest into a positive semidefinite matrix, and then give a Riemannian trust-region algorithm to solve the problem in the complex field. Simulation results show that the proposed algorithm scales efficiently to large JADCE problems and requires shorter pilot sequences than state-of-the-art algorithms that exploit only the sparsity of the device state matrix.
This paper analyzes the impact of non-Gaussian multipath component (MPC) amplitude distributions on the performance of Compressed Sensing (CS) channel estimators for OFDM systems. The number of dominant MPCs that any CS algorithm needs to estimate in order to represent the channel accurately is characterized. This number relates to a Compressibility Index (CI) of the channel that depends on the fourth moment of the MPC amplitude distribution. A connection between the Mean Squared Error (MSE) of any CS estimation algorithm and the fourth moment of the MPC amplitude distribution is revealed, showing that fewer MPCs are needed to estimate channels well when these components have amplitude gains with a large fourth moment. The analytical results are validated via simulations for channels with lognormal MPCs, such as the NYU mmWave channel model. These simulations show that when the MPC amplitude distribution has a high fourth moment, the well-known CS algorithm Orthogonal Matching Pursuit performs almost identically to the Basis Pursuit De-Noising algorithm at a much lower computational cost.
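Orthogonal Matching Pursuit, mentioned above, is compact enough to sketch directly. The matrix sizes, the noiseless setup, and the synthetic 3-sparse "channel" below are illustrative assumptions; a CS channel estimator would apply the same greedy loop to a dictionary of delayed pulse waveforms.

```python
import numpy as np

def omp(A, y, k):
    # Orthogonal Matching Pursuit: greedily add the dictionary column
    # most correlated with the residual, then re-fit the selected
    # columns by least squares and update the residual.
    support, residual = [], y.copy()
    for _ in range(k):
        idx = int(np.argmax(np.abs(A.conj().T @ residual)))
        support.append(idx)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1], dtype=A.dtype)
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((32, 64)) / np.sqrt(32)  # measurement matrix
x_true = np.zeros(64)
x_true[[5, 20, 41]] = [1.0, -0.7, 0.5]           # 3 dominant "paths"
y = A @ x_true                                   # noiseless measurements
x_hat = omp(A, y, k=3)
```

The greedy atom selection is what makes OMP cheap relative to Basis Pursuit De-Noising, which solves a full convex program; the abstract's observation is that this gap in cost does not translate into a performance gap when the amplitude distribution has a high fourth moment.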
Faced with massive connectivity, sporadic transmission, and small data packets in future cellular communication, this paper considers a grant-free non-orthogonal random access (NORA) system, which can reduce the access delay and support more devices. To address the joint user activity detection (UAD) and channel estimation (CE) problem in the grant-free NORA system, we propose a deep neural network-aided message passing-based block sparse Bayesian learning (DNN-MP-BSBL) algorithm. In this algorithm, the message passing process is transferred from a factor graph to a deep neural network (DNN). Weights are imposed on the messages in the DNN and trained to minimize the estimation error. It is shown that these weights alleviate the convergence problem of the MP-BSBL algorithm. Simulation results show that the proposed DNN-MP-BSBL algorithm improves the UAD and CE accuracy with fewer iterations.
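The "unrolling" idea behind DNN-MP-BSBL, mapping each message-passing iteration to a network layer with trainable weights on the messages, can be illustrated with a much simpler LISTA-style sketch. Everything here is an analogy, not the paper's algorithm: ISTA stands in for the MP-BSBL updates, and the per-layer weights are fixed by hand rather than trained.

```python
import numpy as np

def unrolled_ista(A, y, layer_weights, lam=0.02):
    # Each "layer" is one iteration: a weighted gradient step on
    # ||Ax - y||^2 followed by soft-thresholding (the sparsity prior).
    # In a trained unrolled network, layer_weights would be learned
    # to minimize estimation error, speeding up convergence.
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for w in layer_weights:                  # one weight per layer
        x = x - w * step * (A.T @ (A @ x - y))
        x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 80)) / np.sqrt(40)
x_true = np.zeros(80)
x_true[[7, 33, 60]] = [1.2, -0.9, 0.6]       # sparse "active devices"
y = A @ x_true
x_hat = unrolled_ista(A, y, layer_weights=[1.0] * 100)
```

With all weights fixed at 1.0 this is plain ISTA; the benefit claimed for the trained version is that learned weights reach a given accuracy in far fewer layers, mirroring the paper's "fewer iterations" result.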
In this paper, we propose a frequency-time division network (FreqTimeNet) to improve the performance of deep learning (DL) based OFDM channel estimation. FreqTimeNet is designed around the orthogonality between the frequency domain and the time domain: the input is processed by parallel frequency blocks and then by parallel time blocks. By introducing an attention mechanism that exploits SNR information, an attention-based FreqTimeNet (AttenFreqTimeNet) is also proposed. Using 3rd Generation Partnership Project (3GPP) channel models, the mean square error (MSE) performance of FreqTimeNet and AttenFreqTimeNet is evaluated under different scenarios. A method for constructing mixed training data is proposed, which addresses the generalization problem in DL. It is observed that AttenFreqTimeNet outperforms FreqTimeNet, and FreqTimeNet outperforms other DL networks, with acceptable complexity.
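The frequency-then-time processing order can be sketched as two separable linear maps over the channel grid. The shapes and the plain matrix "blocks" below are illustrative stand-ins for the paper's neural blocks; the sketch only shows how one axis is processed at a time.

```python
import numpy as np

def freq_time_pass(H, W_freq, W_time):
    # H: (n_sym, n_sc) channel grid (OFDM symbols x subcarriers).
    H = H @ W_freq   # frequency block: mixes subcarriers within a symbol
    H = W_time @ H   # time block: mixes symbols within a subcarrier
    return H

n_sym, n_sc = 14, 72  # e.g. one slot of 14 symbols over 72 subcarriers
rng = np.random.default_rng(2)
H_in = rng.standard_normal((n_sym, n_sc))

# With identity weights both blocks pass the grid through unchanged,
# which is a convenient sanity check of the separable structure.
H_out = freq_time_pass(H_in, np.eye(n_sc), np.eye(n_sym))
```

Processing each axis independently is what lets the network exploit the stated frequency/time orthogonality instead of learning one large joint map over the whole grid.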