
Machine Learning-Based 3D Channel Modeling for U2V mmWave Communications

Added by Kai Mao
Publication date: 2021
Language: English





Unmanned aerial vehicle (UAV) millimeter wave (mmWave) technologies can provide flexible links and high data rates for future communication networks. By considering the new features of three-dimensional (3D) scattering space, 3D velocity, 3D antenna arrays, and especially 3D rotations, a machine learning (ML)-integrated UAV-to-vehicle (U2V) mmWave channel model is proposed. Meanwhile, an ML-based network for channel parameter calculation and generation is developed. The deterministic parameters are calculated from simplified geometry information, while the random ones are generated by a back-propagation neural network (BPNN) and a generative adversarial network (GAN), whose training data set is obtained from massive ray-tracing (RT) simulations. Moreover, theoretical expressions of channel statistical properties, i.e., the power delay profile (PDP), autocorrelation function (ACF), Doppler power spectral density (DPSD), and cross-correlation function (CCF), are derived and analyzed. Finally, the U2V mmWave channel is generated for a typical urban scenario at 28 GHz. The generated PDP and DPSD show good agreement with RT-based results, which validates the effectiveness of the proposed method. Moreover, the impact of 3D rotations, which has rarely been reported in previous works, can be observed in the generated CCF and ACF, which are also consistent with theoretical and measurement results.
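
As a concrete illustration of the statistical properties listed above, the NumPy sketch below computes a PDP, a temporal ACF, and a DPSD from one set of multipath delays, powers, and Doppler shifts. All values and variable names are hypothetical placeholders drawn at random, standing in for the deterministic (geometry-based) and random (BPNN/GAN-generated) parameters of the proposed model.

```python
import numpy as np

# Hypothetical multipath parameters; in the proposed model the deterministic
# ones come from geometry and the random ones from the BPNN/GAN trained on
# ray-tracing data. Here they are simply drawn at random for illustration.
rng = np.random.default_rng(0)
n_paths = 20
delays_s = np.sort(rng.uniform(0.0, 500e-9, n_paths))    # path delays (s)
powers = rng.exponential(1.0, n_paths)
powers /= powers.sum()                                    # normalized path powers
fc = 28e9                                                 # carrier frequency (Hz)
v = 30.0                                                  # relative UAV-vehicle speed (m/s)
doppler_hz = (v * fc / 3e8) * np.cos(rng.uniform(0.0, 2 * np.pi, n_paths))  # per-path Doppler

# Power delay profile: (delay, power) of each resolvable path
pdp = np.stack([delays_s, powers], axis=1)

# Temporal ACF: R(dt) = sum_n P_n * exp(j * 2*pi * f_{D,n} * dt)
dt = np.arange(0.0, 5e-3, 1e-5)                           # time lags (s)
acf = (powers * np.exp(1j * 2 * np.pi * np.outer(dt, doppler_hz))).sum(axis=1)

# Doppler power spectral density as the Fourier transform of the ACF
dpsd = np.abs(np.fft.fftshift(np.fft.fft(acf)))
```

In the proposed framework, the same formulas would be evaluated on parameters produced by the trained networks and the resulting PDP and DPSD compared against the ray-tracing references.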

Related Research

We present an introduction to model-based machine learning for communication systems. We begin by reviewing existing strategies for combining model-based algorithms and machine learning from a high-level perspective, and compare them to the conventional deep learning approach, which utilizes established deep neural network (DNN) architectures trained in an end-to-end manner. Then, we focus on symbol detection, one of the fundamental tasks of communication receivers. We show how the different strategies of conventional deep architectures, deep unfolding, and DNN-aided hybrid algorithms can be applied to this problem. The last two approaches constitute a middle ground between purely model-based and solely DNN-based receivers. By focusing on this specific task, we highlight the advantages and drawbacks of each strategy, and present guidelines to facilitate the design of future model-based deep learning systems for communications.
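
As a rough sketch of the deep-unfolding strategy mentioned above, the PyTorch snippet below unrolls a fixed number of projected-gradient iterations of a linear least-squares symbol detector into a network with one learnable step size per layer. The class name, layer count, and BPSK projection are illustrative assumptions, not the architecture studied in the paper.

```python
import torch
import torch.nn as nn

class UnfoldedDetector(nn.Module):
    """Minimal deep-unfolding sketch for linear MIMO symbol detection:
    a fixed number of projected-gradient iterations on ||y - Hx||^2,
    each with its own learnable step size. A generic illustration of the
    unfolding idea, not the specific architecture from the paper."""
    def __init__(self, n_layers: int = 10):
        super().__init__()
        self.step = nn.Parameter(torch.full((n_layers,), 0.1))  # per-layer step sizes

    def forward(self, y: torch.Tensor, H: torch.Tensor) -> torch.Tensor:
        x = torch.zeros(H.shape[1], dtype=y.dtype)
        for a in self.step:
            x = x + a * H.t() @ (y - H @ x)   # gradient step toward the least-squares fit
            x = torch.clamp(x, -1.0, 1.0)     # projection toward the BPSK symbol box
        return x
```

Training such a detector end-to-end on pairs of received signals and transmitted symbols tunes only a handful of parameters, which is the main appeal of unfolding over fully generic DNN receivers.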
The accuracy of available channel state information (CSI) directly affects the performance of millimeter wave (mmWave) communications. In this article, we provide an overview of CSI acquisition, including beam training and channel estimation, for mmWave massive multiple-input multiple-output systems. Beam training can avoid the estimation of a large-dimension channel matrix, while channel estimation can flexibly exploit advanced signal processing techniques. After discussing the traditional and machine learning-based approaches, we compare the different approaches in terms of spectral efficiency, computational complexity, and overhead.
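
To make the beam training idea concrete, here is a minimal sketch of an exhaustive beam sweep: each candidate beam from a codebook is applied and the one yielding the largest received power is selected. The codebook construction and channel below are toy assumptions for illustration; practical mmWave systems use hierarchical or learning-aided search to reduce the sweeping overhead.

```python
import numpy as np

def best_beam(h: np.ndarray, codebook: np.ndarray) -> int:
    """Exhaustive beam training: return the codebook beam with the largest
    received power |f^H h|^2. Purely illustrative."""
    gains = np.abs(codebook.conj() @ h) ** 2   # one beamforming gain per candidate
    return int(np.argmax(gains))

# Toy example: a DFT-like codebook over an 8-element ULA and a single-path channel
n_ant, n_beams = 8, 16
angles = np.linspace(-np.pi / 2, np.pi / 2, n_beams)
codebook = np.exp(1j * np.pi * np.outer(np.sin(angles), np.arange(n_ant))) / np.sqrt(n_ant)
h = np.exp(1j * np.pi * np.sin(0.3) * np.arange(n_ant))   # channel steering vector
print(best_beam(h, codebook))                              # index of the best-aligned beam
```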
Channel estimation in wideband millimeter-wave (mmWave) systems is very challenging due to the beam squint effect. To solve the problem, we propose a learnable iterative shrinkage thresholding algorithm-based channel estimator (LISTA-CE) based on deep learning. The proposed channel estimator can learn to transform the beam-frequency mmWave channel into a domain with sparse features through training data. The transform domain enables us to adopt a simple denoiser with few trainable parameters. We further enhance the adaptivity of the estimator by introducing a hypernetwork to automatically generate the learnable parameters of LISTA-CE online. Simulation results show that the proposed approach significantly outperforms state-of-the-art deep learning-based algorithms with lower complexity and fewer parameters, and adapts to new scenarios rapidly.
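
A minimal sketch of a single LISTA-style layer is shown below, assuming the standard learned-ISTA update x ← soft(W1 y + W2 x, θ) with learnable matrices and threshold. The paper's LISTA-CE additionally learns a sparsifying transform for the beam-frequency channel and uses a hypernetwork to generate these parameters online, which is omitted here.

```python
import torch
import torch.nn as nn

def soft_threshold(x: torch.Tensor, theta: torch.Tensor) -> torch.Tensor:
    # Complex soft-thresholding: shrink the magnitude, keep the phase
    return torch.clamp(x.abs() - theta, min=0.0) * torch.exp(1j * x.angle())

class ListaLayer(nn.Module):
    """One generic LISTA iteration, x <- soft(W1 @ y + W2 @ x, theta), with
    learnable matrices and threshold. A simplified stand-in for the layers
    of a LISTA-style channel estimator."""
    def __init__(self, m: int, n: int):
        super().__init__()
        self.W1 = nn.Parameter(0.01 * torch.randn(n, m, dtype=torch.cfloat))
        self.W2 = nn.Parameter(0.01 * torch.randn(n, n, dtype=torch.cfloat))
        self.theta = nn.Parameter(torch.tensor(0.1))

    def forward(self, y: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        return soft_threshold(self.W1 @ y + self.W2 @ x, self.theta)
```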
A switch-based hybrid network is a promising implementation for beamforming in large-scale millimeter wave (mmWave) antenna arrays. By fully exploiting the sparse nature of the mmWave channel, such hybrid beamforming reduces complexity and power consumption compared with a structure based on phase shifters. However, designing an optimum beamformer in the analog domain is prohibitively difficult due to the binary nature of such a switch-based structure. Thus, here we propose a new method for designing a switch-based hybrid beamformer for massive MIMO communications in mmWave bands. We first propose a method for decoupling the joint optimization of the analog and digital beamformers by confining the problem to a rank-constrained subspace. We then approximate the solution through two approaches: norm maximization (SHD-NM) and majorization (SHD-QRQU). In the norm maximization method, we propose a modified sequential convex programming (SCP) procedure that maximizes the mutual information while addressing the mismatch incurred from approximating the log-determinant by a Frobenius norm. In the second method, we employ a lower bound on the mutual information based on QR factorization. We also introduce linear constraints in order to include frequently used partially-connected structures. Finally, we show the feasibility and effectiveness of the proposed methods through several numerical examples. The results demonstrate the ability of the proposed methods to closely track the spectral efficiency provided by the unconstrained optimal beamformer and the phase-shifting hybrid beamformer, and to outperform a competing switch-based hybrid beamformer.
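
For intuition only, the sketch below builds a toy switch-based hybrid precoder: a binary antenna-selection matrix for the analog stage and an SVD-based digital stage over the resulting effective channel. This greedy norm heuristic is an assumption for illustration and is much simpler than the SHD-NM and SHD-QRQU designs described above.

```python
import numpy as np

def switch_hybrid_precoder(H: np.ndarray, n_rf: int):
    """Toy switch-based hybrid precoder. The analog stage F_rf is a binary
    (0/1) antenna-selection matrix (each RF chain is switched onto a disjoint
    set of the strongest antennas); the digital stage F_bb comes from the SVD
    of the effective channel H @ F_rf. Illustrative heuristic only."""
    n_rx, n_tx = H.shape
    per_rf = n_tx // n_rf
    order = np.argsort(np.linalg.norm(H, axis=0))[::-1]    # antennas by channel power
    F_rf = np.zeros((n_tx, n_rf))
    for k in range(n_rf):
        F_rf[order[k * per_rf:(k + 1) * per_rf], k] = 1.0   # close these switches
    Heff = H @ F_rf                                         # effective baseband channel
    _, _, Vh = np.linalg.svd(Heff, full_matrices=False)
    F_bb = Vh.conj().T                                      # right singular vectors as precoder
    F_bb = F_bb / np.linalg.norm(F_rf @ F_bb, 'fro')        # total transmit-power constraint
    return F_rf, F_bb
```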
Zhipeng Lin, Tiejun Lv, Wei Ni (2020)
Channel estimation is challenging for hybrid millimeter wave (mmWave) large-scale antenna arrays, which are promising in 5G/B5G applications. The challenges are associated with angular resolution losses resulting from hybrid front-ends, beam squinting, and susceptibility to receiver noise. Based on tensor signal processing, this paper presents a novel multi-dimensional approach to channel parameter estimation with large-scale mmWave hybrid uniform circular cylindrical arrays (UCyAs), which are compact in size and immune to mutual coupling but known to suffer from infinite-dimensional array responses and intractability. We design a new resolution-preserving hybrid beamformer and a low-complexity beam squint suppression method, and reveal the existence of shift-invariance relations in the tensor models of received array signals at the UCyA. Exploiting these relations, we propose a new tensor-based subspace estimation algorithm to suppress receiver noise in all dimensions (time, frequency, and space). The algorithm can accurately estimate the channel parameters from both coherent and incoherent signals. Corroborated by the Cramér-Rao lower bound (CRLB), simulation results show that the proposed algorithm achieves substantially higher estimation accuracy than existing matrix-based techniques, with comparable computational complexity.
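
As a generic illustration of tensor-based subspace processing of the kind described above, the sketch below applies a truncated higher-order SVD to a noisy space × time × frequency signal tensor, projecting each mode onto its dominant singular subspace to suppress noise in all three dimensions. The function names and rank choices are assumptions; the paper's UCyA-specific algorithm and beam squint handling are not reproduced here.

```python
import numpy as np

def unfold(T: np.ndarray, mode: int) -> np.ndarray:
    """Mode-n unfolding of a 3-way tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T: np.ndarray, M: np.ndarray, mode: int) -> np.ndarray:
    """Multiply tensor T by matrix M along the given mode."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd_denoise(Y: np.ndarray, ranks: tuple) -> np.ndarray:
    """Truncated higher-order SVD of a space x time x frequency signal tensor:
    project every mode onto its dominant singular subspace, suppressing noise
    in all three dimensions. A generic sketch of tensor subspace processing."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(Y, mode), full_matrices=False)
        factors.append(U[:, :r])                      # dominant mode-n subspace basis
    G = Y
    for mode, U in enumerate(factors):                # core tensor: G = Y x_n U_n^H
        G = mode_product(G, U.conj().T, mode)
    Y_hat = G
    for mode, U in enumerate(factors):                # low-rank reconstruction
        Y_hat = mode_product(Y_hat, U, mode)
    return Y_hat
```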