Millimeter wave channels exhibit structure that allows beam alignment with fewer channel measurements than exhaustive beam search. From a compressed sensing (CS) perspective, the received channel measurements are usually obtained by multiplying a CS matrix with a sparse representation of the channel matrix. Designing CS matrices that efficiently exploit the channel structure is, however, challenging due to the constraints imposed by analog processing. In this paper, we propose an end-to-end deep learning technique to design a structured CS matrix that is well suited to the underlying channel distribution, leveraging both sparsity and the particular spatial structure that appears in vehicular channels. The channel measurements acquired with the designed CS matrix are then used to predict the best beam for link configuration. Simulation results for vehicular communication channels indicate that our deep learning-based approach achieves better beam alignment than standard CS techniques that use random phase shift-based designs.
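The following is a minimal, self-contained sketch (not the authors' code) of the general idea: a phase-shift-constrained CS measurement layer whose phases are trained end to end together with a beam classifier. All names, array sizes, and the toy training step are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PhaseShiftCS(nn.Module):
    """M analog measurements y = W^H h, where each entry of W is a unit-modulus
    phase shifter exp(j*theta)/sqrt(N); only the phases theta are trainable."""
    def __init__(self, n_antennas=64, n_measurements=16):
        super().__init__()
        self.theta = nn.Parameter(2 * torch.pi * torch.rand(n_antennas, n_measurements))
        self.scale = n_antennas ** -0.5

    def forward(self, h_re, h_im):
        c, s = torch.cos(self.theta), torch.sin(self.theta)
        # Real/imaginary parts of y = (1/sqrt(N)) * exp(-j*theta)^T h
        y_re = self.scale * (h_re @ c + h_im @ s)
        y_im = self.scale * (h_im @ c - h_re @ s)
        return torch.cat([y_re, y_im], dim=-1)

class BeamPredictor(nn.Module):
    def __init__(self, n_antennas=64, n_measurements=16, n_beams=64):
        super().__init__()
        self.cs = PhaseShiftCS(n_antennas, n_measurements)
        self.mlp = nn.Sequential(
            nn.Linear(2 * n_measurements, 256), nn.ReLU(),
            nn.Linear(256, n_beams))

    def forward(self, h_re, h_im):
        return self.mlp(self.cs(h_re, h_im))   # logits over candidate beams

# Toy training step on random data (real channel samples and labels would replace this).
model = BeamPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
h_re, h_im = torch.randn(32, 64), torch.randn(32, 64)
best_beam = torch.randint(0, 64, (32,))
loss = nn.functional.cross_entropy(model(h_re, h_im), best_beam)
opt.zero_grad(); loss.backward(); opt.step()
```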
This paper presents DeepIA, a deep learning solution for faster and more accurate initial access (IA) in 5G millimeter wave (mmWave) networks compared to conventional IA. By utilizing a subset of beams in the IA process, DeepIA removes the need for an exhaustive beam search, thereby reducing the beam sweep time in IA. A deep neural network (DNN) is trained to learn the complex mapping from the received signal strengths (RSSs) collected with a reduced number of beams to the optimal spatial beam of the receiver (among a larger set of beams). At test time, DeepIA measures RSSs only from a small number of beams and runs the DNN to predict the best beam for IA. We show that DeepIA reduces the IA time by sweeping fewer beams and significantly outperforms the beam prediction accuracy of conventional IA in both line of sight (LoS) and non-line of sight (NLoS) mmWave channel conditions.
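A minimal sketch of the DeepIA mapping (assumed layer sizes, not the paper's exact model): an MLP takes the RSSs from K swept beams and outputs a score for each of the M candidate beams.

```python
import torch
import torch.nn as nn

K_SWEPT_BEAMS = 8      # beams actually measured during IA
M_TOTAL_BEAMS = 64     # full codebook from which the best beam is predicted

beam_classifier = nn.Sequential(
    nn.Linear(K_SWEPT_BEAMS, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, M_TOTAL_BEAMS))   # logits over all M beams

def predict_best_beam(rss_db: torch.Tensor) -> torch.Tensor:
    """rss_db: (batch, K) received signal strengths from the reduced sweep."""
    with torch.no_grad():
        return beam_classifier(rss_db).argmax(dim=-1)

# Example: predict the best of 64 beams from 8 noisy RSS values.
print(predict_best_beam(torch.randn(4, K_SWEPT_BEAMS)))
```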
Beam alignment, the process of finding an optimal directional beam pair, is a challenging procedure crucial to millimeter wave (mmWave) communication systems. We propose a novel beam alignment method that learns a site-specific probing codebook and uses the probing codebook measurements to predict the optimal narrow beam. An end-to-end neural network (NN) architecture is designed to jointly learn the probing codebook and the beam predictor. The learned codebook consists of site-specific probing beams that can capture particular characteristics of the propagation environment. The proposed method relies on beam sweeping of the learned probing codebook, does not require additional context information, and is compatible with the beam sweeping-based beam alignment framework in 5G. Using realistic ray-tracing datasets, we demonstrate that the proposed method can achieve high beam alignment accuracy and signal-to-noise ratio (SNR) while reducing the beam sweeping complexity and latency by roughly a factor of 3 in our setting.
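Below is a sketch, under assumed shapes and names, of the joint structure: a constant-modulus (phase-only, analog-friendly) probing codebook whose phases are trainable, and a predictor that sees only the RSS of the probing sweep, matching 5G-style beam sweeping. It illustrates the idea rather than reproducing the authors' implementation.

```python
import torch
import torch.nn as nn

N_ANT, N_PROBE, N_NARROW = 64, 8, 64

class ProbingCodebookAlignment(nn.Module):
    def __init__(self):
        super().__init__()
        # Trainable phases of the N_PROBE site-specific probing beams.
        self.phase = nn.Parameter(2 * torch.pi * torch.rand(N_ANT, N_PROBE))
        self.predictor = nn.Sequential(
            nn.Linear(N_PROBE, 256), nn.ReLU(),
            nn.Linear(256, N_NARROW))

    def probe_rss(self, h_re, h_im):
        c, s = torch.cos(self.phase), torch.sin(self.phase)
        y_re = (h_re @ c + h_im @ s) / N_ANT ** 0.5
        y_im = (h_im @ c - h_re @ s) / N_ANT ** 0.5
        return y_re ** 2 + y_im ** 2            # RSS of each probing beam

    def forward(self, h_re, h_im):
        return self.predictor(self.probe_rss(h_re, h_im))  # narrow-beam logits

model = ProbingCodebookAlignment()
logits = model(torch.randn(4, N_ANT), torch.randn(4, N_ANT))
# After training, the learned probing beams can be read out as complex weights
# exp(j*phase)/sqrt(N_ANT) and loaded into the analog array for sweeping.
```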
Deep learning provides powerful means to learn from spectrum data and solve complex tasks in 5G and beyond, such as beam selection for initial access (IA) in mmWave communications. To establish the IA between the base station (e.g., gNodeB) and user equipment (UE) for directional transmissions, a deep neural network (DNN) can predict the beam that is best aligned with each UE by using the received signal strengths (RSSs) from a subset of possible narrow beams. While improving the latency and reliability of beam selection compared to the conventional IA that sweeps all beams, the DNN itself is susceptible to adversarial attacks. We present an adversarial attack by generating adversarial perturbations to manipulate the over-the-air captured RSSs as the input to the DNN. This attack significantly reduces the IA performance and, compared to jamming attacks with Gaussian or uniform noise, is far more effective at fooling the DNN into choosing beams with small RSSs.
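As an illustration only, here is a generic FGSM-style sketch of an input perturbation on the RSS vector fed to such a beam-selection DNN; the actual attack construction in the paper may differ, and the model, budget, and shapes below are assumptions.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 128), nn.ReLU(), nn.Linear(128, 64))
rss = torch.randn(1, 8, requires_grad=True)        # over-the-air measured RSSs
true_best_beam = torch.tensor([12])

# Gradient of the classification loss with respect to the RSS input.
loss = nn.functional.cross_entropy(model(rss), true_best_beam)
loss.backward()

epsilon = 0.1                                      # perturbation budget
adv_rss = rss + epsilon * rss.grad.sign()          # push away from the true beam
print(model(rss).argmax(-1), model(adv_rss).argmax(-1))
```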
Ultra-Reliable and Low-Latency Communications (URLLC) services in vehicular networks on millimeter-wave bands present a significant challenge, given the need to constantly adjust the beam directions. Conventional methods are mostly based on classical control theory, e.g., the Kalman filter and its variations, which mainly deal with stationary scenarios. These methods are therefore severely limited in practice, especially over complicated, dynamic Vehicle-to-Everything (V2X) channels. This paper gives a thorough study of this subject, by first modifying the classical approaches, e.g., the Extended Kalman Filter (EKF) and Particle Filter (PF), for non-stationary scenarios, and then proposing a Reinforcement Learning (RL)-based approach that can achieve the URLLC requirements in a typical intersection scenario. Simulation results based on a commercial ray-tracing simulator show that the enhanced EKF and PF methods achieve packet delays of more than $10$ ms, whereas the proposed deep RL-based method can reduce the latency to about $6$ ms by extracting context information from the training data.
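A minimal deep Q-learning sketch of the kind of RL formulation described above; the state and reward definitions, network, and hyperparameters here are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

N_BEAMS, STATE_DIM = 32, 6   # state: e.g. previous beam, coarse position/velocity

q_net = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(), nn.Linear(128, N_BEAMS))
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.95

def td_update(state, action, reward, next_state):
    """One temporal-difference step; the reward could be the achieved SNR, or a
    penalty whenever the packet latency budget is violated."""
    with torch.no_grad():
        target = reward + gamma * q_net(next_state).max(dim=-1).values
    q_sa = q_net(state).gather(-1, action.unsqueeze(-1)).squeeze(-1)
    loss = nn.functional.mse_loss(q_sa, target)
    opt.zero_grad(); loss.backward(); opt.step()

# Example transition batch from a (hypothetical) intersection-scenario simulator.
td_update(torch.randn(16, STATE_DIM), torch.randint(0, N_BEAMS, (16,)),
          torch.rand(16), torch.randn(16, STATE_DIM))
```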
The huge overhead of beam training poses a significant challenge to mmWave communications. To address this issue, beam tracking has been widely investigated, but existing methods struggle to handle severe multipath interference and non-stationary scenarios. Inspired by the spatial similarity between low-frequency and mmWave channels in non-standalone architectures, this paper proposes to utilize prior low-frequency information to predict the optimal mmWave beam, where deep learning is adopted to enhance the prediction accuracy. Specifically, periodically estimated low-frequency channel state information (CSI) is applied to track the movement of user equipment, and a timing offset indicator is proposed to indicate the instant of mmWave beam training relative to low-frequency CSI estimation. Meanwhile, dedicated models based on long short-term memory (LSTM) networks are designed to implement the prediction. Simulation results show that our proposed scheme can achieve higher beamforming gain than the conventional methods while requiring little mmWave beam training overhead.
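A minimal sketch, with assumed feature shapes, of an LSTM that maps a sequence of low-frequency CSI features plus a timing-offset indicator to the predicted mmWave beam; it is illustrative only, not the authors' dedicated models.

```python
import torch
import torch.nn as nn

class LowBandBeamPredictor(nn.Module):
    def __init__(self, csi_dim=32, hidden=128, n_beams=64):
        super().__init__()
        self.lstm = nn.LSTM(csi_dim, hidden, batch_first=True)
        # +1 for the timing-offset indicator (time between the last low-band CSI
        # estimate and the instant at which the mmWave beam must be chosen).
        self.head = nn.Linear(hidden + 1, n_beams)

    def forward(self, csi_seq, timing_offset):
        _, (h_last, _) = self.lstm(csi_seq)            # csi_seq: (B, T, csi_dim)
        feats = torch.cat([h_last[-1], timing_offset], dim=-1)
        return self.head(feats)                        # logits over mmWave beams

model = LowBandBeamPredictor()
logits = model(torch.randn(4, 10, 32), torch.rand(4, 1))
print(logits.argmax(-1))
```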