This paper presents DeepIA, a deep learning solution for faster and more accurate initial access (IA) in 5G millimeter wave (mmWave) networks compared to conventional IA. By utilizing a subset of beams in the IA process, DeepIA removes the need for an exhaustive beam search, thereby reducing the beam sweep time. A deep neural network (DNN) is trained to learn the complex mapping from the received signal strengths (RSSs) collected with a reduced number of beams to the optimal spatial beam of the receiver (among a larger set of beams). At test time, DeepIA measures RSSs from only a small number of beams and runs the DNN to predict the best beam for IA. We show that DeepIA reduces the IA time by sweeping fewer beams and significantly outperforms the conventional IA's beam prediction accuracy in both line of sight (LoS) and non-line of sight (NLoS) mmWave channel conditions.
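To make the described mapping concrete, the following is a minimal sketch of such a DNN-based beam classifier in PyTorch: it takes an RSS vector measured over a reduced beam subset and outputs a predicted best-beam index. The layer sizes, beam counts, and placeholder inputs are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal sketch of a DeepIA-style beam classifier (illustrative sizes only).
import torch
import torch.nn as nn

NUM_SWEPT_BEAMS = 6     # reduced subset of beams whose RSSs are measured (assumption)
NUM_TOTAL_BEAMS = 24    # full codebook from which the best beam is predicted (assumption)

class BeamPredictor(nn.Module):
    def __init__(self, num_in=NUM_SWEPT_BEAMS, num_out=NUM_TOTAL_BEAMS):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_in, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, num_out),   # logits over all candidate beams
        )

    def forward(self, rss):            # rss: (batch, NUM_SWEPT_BEAMS), e.g. in dB
        return self.net(rss)

model = BeamPredictor()
rss_measured = torch.randn(1, NUM_SWEPT_BEAMS)    # placeholder RSS vector
best_beam = model(rss_measured).argmax(dim=-1)    # index of the predicted IA beam
print(int(best_beam))
```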
Deep learning provides powerful means to learn from spectrum data and solve complex tasks in 5G and beyond, such as beam selection for initial access (IA) in mmWave communications. To establish IA between the base station (e.g., gNodeB) and user equipment (UE) for directional transmissions, a deep neural network (DNN) can predict the beam that is best oriented to each UE by using the received signal strengths (RSSs) from a subset of possible narrow beams. While improving the latency and reliability of beam selection compared to the conventional IA that sweeps all beams, the DNN itself is susceptible to adversarial attacks. We present an adversarial attack that generates adversarial perturbations to manipulate the over-the-air captured RSSs used as the input to the DNN. This attack reduces the IA performance significantly and is more effective at fooling the DNN into choosing beams with small RSSs than jamming attacks with Gaussian or uniform noise.
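The sketch below illustrates one standard way to craft such an input perturbation, a fast gradient sign method (FGSM) step against a stand-in beam classifier. The paper's actual attack construction and perturbation budget may differ; the model, sizes, and epsilon here are all assumptions.

```python
# Illustrative FGSM-style perturbation of the RSS input to a beam-prediction DNN.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_SWEPT_BEAMS, NUM_TOTAL_BEAMS = 6, 24   # illustrative sizes (assumption)

# Stand-in beam classifier with the same interface as the DeepIA-style sketch above.
model = nn.Sequential(nn.Linear(NUM_SWEPT_BEAMS, 64), nn.ReLU(),
                      nn.Linear(64, NUM_TOTAL_BEAMS))

def perturb_rss(rss, true_beam, epsilon=0.5):
    """FGSM-style step: nudge the measured RSS vector to increase the DNN loss."""
    rss = rss.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(rss), true_beam)   # loss w.r.t. the correct beam
    loss.backward()
    # Move the input in the direction that increases the loss, under an
    # L-infinity budget epsilon on the over-the-air perturbation (toy constraint).
    return (rss + epsilon * rss.grad.sign()).detach()

rss = torch.randn(1, NUM_SWEPT_BEAMS)     # placeholder over-the-air RSS measurement
true_beam = torch.tensor([3])             # hypothetical ground-truth beam index
adv = perturb_rss(rss, true_beam)
print("clean:", model(rss).argmax(-1).item(), "attacked:", model(adv).argmax(-1).item())
```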
We present DeepIA, a deep neural network (DNN) framework for enabling fast and reliable initial access (IA) in AI-driven beyond-5G and 6G millimeter wave (mmWave) networks. DeepIA reduces the beam sweep time compared to a conventional exhaustive-search-based IA process by utilizing only a subset of the available beams. DeepIA maps received signal strengths (RSSs) obtained from a subset of beams to the beam that is best oriented to the receiver. In both line of sight (LoS) and non-line of sight (NLoS) conditions, DeepIA reduces the IA time and outperforms the conventional IA's beam prediction accuracy. We show that the beam prediction accuracy of DeepIA saturates with the number of beams used for IA and depends on the particular selection of those beams; a careful beam selection improves the accuracy by up to 70% in LoS conditions and by up to 35% in NLoS conditions. We also find that averaging multiple RSS snapshots further reduces the number of beams needed and achieves more than 95% accuracy in both LoS and NLoS conditions. Finally, we evaluate the beam prediction time of DeepIA through an embedded hardware implementation and show the improvement over conventional beam sweeping.
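As an illustration of the snapshot-averaging step, the sketch below averages several noisy RSS snapshots before they would be fed to the beam-prediction DNN. Averaging in the linear power domain and the example values are assumptions, not necessarily the paper's exact procedure.

```python
import numpy as np

def average_rss_snapshots(rss_snapshots_db):
    """Average multiple RSS snapshots (in dB) per swept beam.

    rss_snapshots_db: array of shape (num_snapshots, num_swept_beams).
    Averaging in the linear (mW) domain and converting back to dB is one
    reasonable choice; the paper may average differently.
    """
    linear = 10.0 ** (np.asarray(rss_snapshots_db) / 10.0)
    return 10.0 * np.log10(linear.mean(axis=0))

# Three noisy snapshots of RSS over four swept beams (illustrative values in dBm).
snapshots = np.array([[-72.1, -80.4, -65.3, -77.9],
                      [-71.4, -79.8, -66.0, -78.3],
                      [-72.8, -81.1, -64.9, -77.5]])
print(average_rss_snapshots(snapshots))   # smoothed input for the beam classifier
```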
The huge overhead of beam training poses a significant challenge to mmWave communications. To address this issue, beam tracking has been widely investigated, but existing methods struggle to handle severe multipath interference and non-stationary scenarios. Inspired by the spatial similarity between low-frequency and mmWave channels in non-standalone architectures, this paper proposes to utilize prior low-frequency information to predict the optimal mmWave beam, where deep learning is adopted to enhance the prediction accuracy. Specifically, periodically estimated low-frequency channel state information (CSI) is applied to track the movement of user equipment, and a timing offset indicator is proposed to indicate the instant of mmWave beam training relative to the low-frequency CSI estimation. Meanwhile, dedicated models based on long short-term memory (LSTM) networks are designed to implement the prediction. Simulation results show that the proposed scheme achieves higher beamforming gain than conventional methods while requiring little mmWave beam training overhead.
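A minimal sketch of such an LSTM-based predictor is shown below: it consumes a sequence of low-frequency CSI features together with a timing offset indicator and outputs logits over mmWave beams. The feature dimensions, hidden size, and the way the timing offset is injected are all illustrative assumptions.

```python
import torch
import torch.nn as nn

NUM_CSI_FEATURES, NUM_MMWAVE_BEAMS, HIDDEN = 16, 32, 64   # illustrative sizes

class LowFreqBeamTracker(nn.Module):
    """LSTM mapping periodic low-frequency CSI features plus a timing offset
    indicator to a predicted mmWave beam (a sketch, not the paper's model)."""
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(NUM_CSI_FEATURES + 1, HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, NUM_MMWAVE_BEAMS)

    def forward(self, csi_seq, timing_offset):
        # csi_seq: (batch, T, NUM_CSI_FEATURES); timing_offset: (batch, 1)
        offset = timing_offset.unsqueeze(1).expand(-1, csi_seq.size(1), -1)
        out, _ = self.lstm(torch.cat([csi_seq, offset], dim=-1))
        return self.head(out[:, -1])            # logits over mmWave beams

model = LowFreqBeamTracker()
csi_seq = torch.randn(1, 10, NUM_CSI_FEATURES)  # 10 periodic low-frequency CSI estimates
offset = torch.tensor([[0.3]])                  # normalized timing offset indicator
print(model(csi_seq, offset).argmax(-1).item())
```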
Millimeter wave channels exhibit structure that allows beam alignment with fewer channel measurements than exhaustive beam search. From a compressed sensing (CS) perspective, the received channel measurements are usually obtained by multiplying a CS matrix with a sparse representation of the channel matrix. Due to the constraints imposed by analog processing, however, designing CS matrices that efficiently exploit the channel structure is challenging. In this paper, we propose an end-to-end deep learning technique to design a structured CS matrix that is well suited to the underlying channel distribution, leveraging both sparsity and the particular spatial structure that appears in vehicular channels. The channel measurements acquired with the designed CS matrix are then used to predict the best beam for link configuration. Simulation results for vehicular communication channels indicate that our deep learning-based approach achieves better beam alignment than standard CS techniques that use a random phase shift-based design.
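The sketch below captures the general idea of learning a phase-only (unit-modulus) sensing matrix end to end with a beam classifier, reflecting the analog constraint that only phases can be controlled. The array size, number of measurements, and classifier are assumptions and do not reproduce the authors' architecture.

```python
import math
import torch
import torch.nn as nn

NUM_ANTENNAS, NUM_MEASUREMENTS, NUM_BEAMS = 32, 8, 64   # illustrative sizes

class LearnedPhaseCS(nn.Module):
    """End-to-end sketch: a trainable phase-only CS matrix followed by a small
    classifier that predicts the best beam from the compressed measurements."""
    def __init__(self):
        super().__init__()
        # Only phases are trainable, so every sensing-matrix entry has unit
        # modulus, as required by analog (phase shifter) processing.
        self.phases = nn.Parameter(torch.rand(NUM_MEASUREMENTS, NUM_ANTENNAS) * 2 * math.pi)
        self.classifier = nn.Sequential(
            nn.Linear(2 * NUM_MEASUREMENTS, 128), nn.ReLU(),
            nn.Linear(128, NUM_BEAMS),
        )

    def forward(self, h):
        # h: (batch, NUM_ANTENNAS) complex channel vector
        A = torch.exp(1j * self.phases) / NUM_ANTENNAS ** 0.5   # unit-modulus CS matrix
        y = h @ A.T                                             # compressed measurements
        feats = torch.cat([y.real, y.imag], dim=-1)             # real-valued features
        return self.classifier(feats)                           # logits over beams

model = LearnedPhaseCS()
h = torch.randn(1, NUM_ANTENNAS, dtype=torch.cfloat)            # placeholder channel
print(model(h).argmax(-1).item())
```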
Millimeter-wave (mmWave) communications rely on directional transmissions to overcome severe path loss. Nevertheless, the use of narrow beams complicates the initial access procedure and increases the latency, since the transmitter and receiver beams must be aligned for proper link establishment. In this paper, we investigate the feasibility of random beamforming for the cell-search phase of initial access. We develop a stochastic geometry framework to analyze the performance in terms of detection failure probability and the expected latency of initial access as well as total data transmission. We also compare our scheme with the widely used exhaustive search and iterative search schemes, in both the control plane and the data plane. Our numerical results show that, compared to the other two schemes, random beamforming can substantially reduce the latency of initial access with comparable failure probability in dense networks. The gain of random beamforming is more prominent under light traffic and for low-latency services. Our work demonstrates that developing complex cell-discovery algorithms may be unnecessary in dense mmWave networks and thus sheds new light on mmWave network design.
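The toy Monte Carlo sketch below illustrates the latency comparison in the simplest possible terms: exhaustive search always pays a full sweep, whereas random beamforming stops at the first successful detection. The per-slot detection probability P_HIT is a stand-in for the dense-network setting the paper analyzes with stochastic geometry and is purely an assumption, as are the slot budget and beam count.

```python
import numpy as np

rng = np.random.default_rng(1)

N_BEAMS = 16       # beams swept by exhaustive search (illustrative)
P_HIT = 0.3        # per-slot detection prob. for a randomly pointed beam; in a
                   # dense network several directions see a detectable BS, so
                   # this can be well above 1/N_BEAMS (toy assumption)
MAX_SLOTS = 64     # cell-search budget before declaring a detection failure

def random_bf_latency():
    """Slots until a randomly pointed beam first detects a BS (NaN on failure)."""
    for t in range(1, MAX_SLOTS + 1):
        if rng.random() < P_HIT:
            return t
    return np.nan

lat = np.array([random_bf_latency() for _ in range(100_000)])
ok = ~np.isnan(lat)
print(f"exhaustive search latency : {N_BEAMS} slots")
print(f"random BF mean latency    : {lat[ok].mean():.2f} slots")
print(f"random BF failure prob.   : {1 - ok.mean():.4f}")
```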