
Adaptive Channel Estimation Based on Model-Driven Deep Learning for Wideband mmWave Systems

Published by: Weijie Jin
Publication date: 2021
Research field: Electronic engineering
Paper language: English





Channel estimation in wideband millimeter-wave (mmWave) systems is very challenging due to the beam squint effect. To address this problem, we propose a deep-learning-based channel estimator built on a learnable iterative shrinkage thresholding algorithm (LISTA-CE). The proposed estimator learns from training data to transform the beam-frequency mmWave channel into a domain with sparse features. This transform domain allows us to adopt a simple denoiser with few trainable parameters. We further enhance the adaptivity of the estimator by introducing a hypernetwork that automatically generates the learnable parameters of LISTA-CE online. Simulation results show that the proposed approach significantly outperforms state-of-the-art deep-learning-based algorithms with lower complexity and fewer parameters, and adapts rapidly to new scenarios.
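LISTA unrolls the classical iterative shrinkage-thresholding algorithm (ISTA) into a network whose matrices and per-layer thresholds are learned from data. As a minimal NumPy sketch of the fixed-point update being unrolled (the matrix `A`, threshold `theta`, and step size here are illustrative, not the paper's trained estimator):

```python
import numpy as np

def soft_threshold(x, theta):
    """Elementwise soft-thresholding, the shrinkage step ISTA/LISTA apply."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def ista(A, y, theta=0.05, n_iters=300):
    """Classical ISTA for y = A x + n with x sparse.
    LISTA unrolls these iterations and learns the matrices and
    per-layer thresholds from training data instead of fixing them."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, theta / L)
    return x
```

Because the learned transform makes the channel sparse, only this simple shrinkage denoiser (with few trainable parameters) is needed inside each layer.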




Read also

Zhipeng Lin, Tiejun Lv, Wei Ni (2020)
Channel estimation is challenging for hybrid millimeter wave (mmWave) large-scale antenna arrays, which are promising in 5G/B5G applications. The challenges are associated with angular resolution losses resulting from hybrid front-ends, beam squinting, and susceptibility to receiver noise. Based on tensor signal processing, this paper presents a novel multi-dimensional approach to channel parameter estimation with large-scale mmWave hybrid uniform circular cylindrical arrays (UCyAs), which are compact in size and immune to mutual coupling but known to suffer from infinite-dimensional array responses and intractability. We design a new resolution-preserving hybrid beamformer and a low-complexity beam squint suppression method, and reveal the existence of shift-invariance relations in the tensor models of received array signals at the UCyA. Exploiting these relations, we propose a new tensor-based subspace estimation algorithm to suppress the receiver noise in all dimensions (time, frequency, and space). The algorithm can accurately estimate the channel parameters from both coherent and incoherent signals. Corroborated by the Cramér-Rao lower bound (CRLB), simulation results show that the proposed algorithm achieves substantially higher estimation accuracy than existing matrix-based techniques, at comparable computational complexity.
Hengtao He, Rui Wang, Weijie Jin (2020)
Millimeter-wave (mmWave) communications have been one of the promising technologies for future wireless networks that integrate a wide range of data-demanding applications. To compensate for the large channel attenuation in the mmWave band and avoid high hardware cost, a lens-based beamspace massive multiple-input multiple-output (MIMO) system is considered. However, the beam squint effect in wideband mmWave systems makes channel estimation very challenging, especially when the receiver is equipped with a limited number of radio-frequency (RF) chains. Furthermore, real channel data cannot be obtained before the mmWave system is deployed in a new environment, which makes it impossible to train a deep learning (DL)-based channel estimator on a real data set beforehand. To solve this problem, we propose a model-driven unsupervised learning network, named the learned denoising-based generalized expectation consistent (LDGEC) signal recovery network. By utilizing Stein's unbiased risk estimator loss, the LDGEC network can be trained using only the limited measurements corresponding to the pilot symbols, instead of real channel data. Although designed for unsupervised learning, the LDGEC network can also be trained in a supervised, denoiser-by-denoiser manner with real channel data. The numerical results demonstrate that the LDGEC-based channel estimator significantly outperforms state-of-the-art compressive sensing-based algorithms when the receiver is equipped with a small number of RF chains and low-resolution ADCs.
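Stein's unbiased risk estimator (SURE) is what allows training without clean channel data: it estimates a denoiser's mean-square error from the noisy observation alone. A minimal Monte-Carlo SURE sketch (assuming i.i.d. Gaussian noise of known variance; the single-probe divergence estimator is the standard construction, and all names here are illustrative, not the LDGEC implementation):

```python
import numpy as np

def mc_sure(denoiser, y, sigma, eps=1e-3, rng=None):
    """Monte-Carlo SURE: an unbiased estimate of a denoiser's mean-square
    error against the unknown clean signal, computed from the noisy
    observation y alone (i.i.d. Gaussian noise, variance sigma**2).
    The divergence term uses a single Rademacher probe."""
    rng = rng or np.random.default_rng()
    n = y.size
    f_y = denoiser(y)
    b = rng.choice([-1.0, 1.0], size=y.shape)      # random probe direction
    div = b.ravel() @ (denoiser(y + eps * b) - f_y).ravel() / eps
    return np.sum((f_y - y) ** 2) / n - sigma ** 2 + 2.0 * sigma ** 2 * div / n
```

Minimizing this quantity with respect to the denoiser's parameters is equivalent, in expectation, to minimizing the true MSE, which is why only pilot measurements are needed during training.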
Channel estimation is very challenging when the receiver is equipped with a limited number of radio-frequency (RF) chains in beamspace millimeter-wave (mmWave) massive multiple-input multiple-output systems. To solve this problem, we exploit a learned denoising-based approximate message passing (LDAMP) network. This neural network can learn the channel structure and estimate the channel from a large number of training data. Furthermore, we provide an analytical framework for the asymptotic performance of the channel estimator. Based on our analysis and simulation results, the LDAMP neural network significantly outperforms state-of-the-art compressed sensing-based algorithms even when the receiver is equipped with a small number of RF chains. Therefore, deep learning is a powerful tool for channel estimation in mmWave communications.
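LDAMP unrolls denoising-based approximate message passing (D-AMP), in which each iteration denoises the pseudo-data and adds an Onsager correction to the residual. A hedged NumPy sketch with a plug-in soft-thresholding denoiser standing in for the learned network (matrix shapes, the threshold factor, and the Monte-Carlo divergence estimate are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def damp(A, y, denoiser, n_iters=30, eps=1e-3, rng=None):
    """Denoising-based AMP: each iteration denoises the pseudo-data
    r = x + A^T z and adds an Onsager correction to the residual z.
    LDAMP replaces `denoiser` with a trained neural network."""
    rng = rng or np.random.default_rng(0)
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(n_iters):
        sigma = np.linalg.norm(z) / np.sqrt(m)     # effective noise level
        r = x + A.T @ z
        x_new = denoiser(r, sigma)
        # Onsager term: Monte-Carlo estimate of the denoiser's divergence
        b = rng.choice([-1.0, 1.0], size=n)
        div = b @ (denoiser(r + eps * b, sigma) - x_new) / eps
        z = y - A @ x_new + z * div / m
        x = x_new
    return x
```

The Onsager term keeps the effective noise in `r` approximately Gaussian across iterations, which is the property that lets a Gaussian-noise denoiser (learned or classical) be plugged in at every layer.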
A reconfigurable intelligent surface (RIS) can shape the radio propagation environment by redirecting the impinging electromagnetic waves towards any desired direction, thus breaking the general Snell's reflection law. However, optimal control of the RIS requires perfect channel state information (CSI) of the individual channels that link the base station (BS) and the mobile station (MS) to each other via the RIS. Therefore, super-resolution channel (parameter) estimation needs to be conducted efficiently at the BS or MS, with CSI feedback to the RIS controller. In this paper, we adopt a two-stage channel estimation scheme for RIS-aided millimeter wave (mmWave) MIMO systems without a direct BS-MS channel, using atomic norm minimization to sequentially estimate the channel parameters, i.e., the angular parameters, angle differences, and products of propagation path gains. We evaluate the mean square error of the parameter estimates, the RIS gains, the average effective spectrum efficiency bound, and the average squared distance between the designed beamforming and combining vectors and the optimal ones. The results demonstrate that the proposed scheme achieves super-resolution estimation compared to the existing benchmark schemes, thus offering promising performance in the subsequent data transmission phase.
Unmanned aerial vehicle (UAV) millimeter wave (mmWave) technologies can provide flexible links and high data rates for future communication networks. By considering the new features of three-dimensional (3D) scattering space, 3D velocity, 3D antenna arrays, and especially 3D rotations, a machine learning (ML)-integrated UAV-to-Vehicle (U2V) mmWave channel model is proposed. Meanwhile, an ML-based network for channel parameter calculation and generation is developed. The deterministic parameters are calculated based on the simplified geometry information, while the random ones are generated by a back-propagation-based neural network (BPNN) and a generative adversarial network (GAN), where the training data set is obtained from massive ray-tracing (RT) simulations. Moreover, theoretical expressions of the channel statistical properties, i.e., the power delay profile (PDP), autocorrelation function (ACF), Doppler power spectrum density (DPSD), and cross-correlation function (CCF), are derived and analyzed. Finally, the U2V mmWave channel is generated under a typical urban scenario at 28 GHz. The generated PDP and DPSD show good agreement with the RT-based results, which validates the effectiveness of the proposed method. Moreover, the impact of 3D rotations, which has rarely been reported in previous works, can be observed in the generated CCF and ACF, which are also consistent with the theoretical and measurement results.