
Learning Site-Specific Probing Beams for Fast mmWave Beam Alignment

Posted by Yuqiang Heng
Publication date: 2021
Research field: Electronic engineering
Paper language: English





Beam alignment - the process of finding an optimal directional beam pair - is a challenging procedure crucial to millimeter wave (mmWave) communication systems. We propose a novel beam alignment method that learns a site-specific probing codebook and uses the probing codebook measurements to predict the optimal narrow beam. An end-to-end neural network (NN) architecture is designed to jointly learn the probing codebook and the beam predictor. The learned codebook consists of site-specific probing beams that can capture particular characteristics of the propagation environment. The proposed method relies on beam sweeping of the learned probing codebook, does not require additional context information, and is compatible with the beam sweeping-based beam alignment framework in 5G. Using realistic ray-tracing datasets, we demonstrate that the proposed method can achieve high beam alignment accuracy and signal-to-noise ratio (SNR) while significantly - by roughly a factor of 3 in our setting - reducing the beam sweeping complexity and latency.
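The pipeline described above can be sketched in a few lines: analog probing beams measure the channel, the received powers are fed to a small predictor network, and the predicted narrow-beam index is returned. This is an illustrative numpy sketch, not the paper's architecture: the dimensions (`N_ANT`, `N_PROBE`, `N_NARROW`) and the untrained predictor weights are assumptions, standing in for quantities that would be learned end-to-end.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ANT = 16     # base-station antennas (assumed)
N_PROBE = 4    # learned probing beams, fewer than the narrow beams
N_NARROW = 32  # candidate narrow beams to predict among

# Site-specific probing codebook: unit-modulus (analog phase-shifter) weights.
theta = rng.uniform(0, 2 * np.pi, (N_PROBE, N_ANT))
W_probe = np.exp(1j * theta) / np.sqrt(N_ANT)

def probe_and_predict(h, W1, b1, W2, b2):
    """Sweep the probing codebook, then map received powers to a narrow beam."""
    y = W_probe @ h                      # one complex measurement per probing beam
    p = np.abs(y) ** 2                   # received power (phase discarded)
    hidden = np.maximum(W1 @ p + b1, 0)  # ReLU layer of the beam predictor
    logits = W2 @ hidden + b2
    return int(np.argmax(logits))        # index of the predicted narrow beam

# Toy (untrained) predictor weights stand in for the jointly learned NN.
W1, b1 = rng.standard_normal((8, N_PROBE)), np.zeros(8)
W2, b2 = rng.standard_normal((N_NARROW, 8)), np.zeros(N_NARROW)

h = rng.standard_normal(N_ANT) + 1j * rng.standard_normal(N_ANT)
beam = probe_and_predict(h, W1, b1, W2, b2)
```

The sweeping-complexity saving comes from `N_PROBE` being much smaller than `N_NARROW`: only the probing beams are swept, yet a beam from the full narrow codebook is selected.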




Read also

Millimeter wave channels exhibit structure that allows beam alignment with fewer channel measurements than exhaustive beam search. From a compressed sensing (CS) perspective, the received channel measurements are usually obtained by multiplying a CS matrix with a sparse representation of the channel matrix. Due to the constraints imposed by analog processing, designing CS matrices that efficiently exploit the channel structure is, however, challenging. In this paper, we propose an end-to-end deep learning technique to design a structured CS matrix that is well suited to the underlying channel distribution, leveraging both sparsity and the particular spatial structure that appears in vehicular channels. The channel measurements acquired with the designed CS matrix are then used to predict the best beam for link configuration. Simulation results for vehicular communication channels indicate that our deep learning-based approach achieves better beam alignment than standard CS techniques that use the random phase shift-based design.
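The analog-processing constraint mentioned above means every entry of the CS matrix must be a unit-modulus phase shift. A minimal sketch of that constraint, with assumed sizes (`N_MEAS`, `N_ATOMS`) and a random matrix standing in for the end-to-end-trained one:

```python
import numpy as np

rng = np.random.default_rng(1)
N_ATOMS = 64  # length of the sparse channel representation (assumed)
N_MEAS = 8    # compressed measurements acquired per channel

# An unconstrained matrix (stand-in for the learned result) is projected onto
# the analog constraint: every entry becomes a unit-modulus phase shift.
A_raw = rng.standard_normal((N_MEAS, N_ATOMS)) \
    + 1j * rng.standard_normal((N_MEAS, N_ATOMS))
A_cs = np.exp(1j * np.angle(A_raw)) / np.sqrt(N_ATOMS)

# Sparse channel representation: a few dominant propagation paths.
x = np.zeros(N_ATOMS, dtype=complex)
x[[3, 17]] = [1.0, 0.5j]

y = A_cs @ x  # measurements later used to predict the best beam
```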
This paper presents DeepIA, a deep learning solution for faster and more accurate initial access (IA) in 5G millimeter wave (mmWave) networks when compared to conventional IA. By utilizing a subset of beams in the IA process, DeepIA removes the need for an exhaustive beam search, thereby reducing the beam sweep time in IA. A deep neural network (DNN) is trained to learn the complex mapping from the received signal strengths (RSSs) collected with a reduced number of beams to the optimal spatial beam of the receiver (among a larger set of beams). At test time, DeepIA measures RSSs only from a small number of beams and runs the DNN to predict the best beam for IA. We show that DeepIA reduces the IA time by sweeping fewer beams and significantly outperforms the conventional IA's beam prediction accuracy in both line of sight (LoS) and non-line of sight (NLoS) mmWave channel conditions.
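The DeepIA mapping described above, RSS from a swept subset in and a distribution over the full beam set out, can be sketched as a tiny classifier. The sizes and the untrained weights are assumptions; a real deployment would train the DNN on collected RSS/beam pairs.

```python
import numpy as np

rng = np.random.default_rng(2)
N_TOTAL = 24  # receiver's full narrow-beam set (assumed)
N_SWEPT = 6   # reduced subset actually swept during IA

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy (untrained) DNN: RSS from the swept subset -> distribution over all beams.
W1, b1 = rng.standard_normal((16, N_SWEPT)), np.zeros(16)
W2, b2 = rng.standard_normal((N_TOTAL, 16)), np.zeros(N_TOTAL)

rss = rng.rayleigh(size=N_SWEPT)  # received signal strengths from the sweep
probs = softmax(W2 @ np.maximum(W1 @ rss + b1, 0))
best_beam = int(np.argmax(probs))
```

The sweep-time saving is the ratio `N_TOTAL / N_SWEPT`: only 6 of 24 beams are measured here, yet the prediction is made over all 24.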
This article investigates beam alignment for a multi-user millimeter wave (mmWave) massive multi-input multi-output system. Unlike the existing works using machine learning (ML), an alignment method with partial beams using ML (AMPBML) is proposed without any prior knowledge such as user location information. The neural network (NN) for the AMPBML is trained offline using simulated environments according to the mmWave channel model and is then deployed online to predict the beam distribution vector using partial beams. Afterwards, the beams for all users are aligned simultaneously based on the indices of the dominant entries of the obtained beam distribution vector. Simulation results demonstrate that the AMPBML outperforms the existing methods, including adaptive compressed sensing, hierarchical search, and multi-path decomposition and recovery, in terms of the total training time slots and the spectral efficiency.
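The final step of the AMPBML pipeline, reading off all users' beams at once from the dominant entries of the predicted beam distribution vector, can be sketched as follows. The vector here is random; in the paper it is the NN's output. The sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
N_BEAMS = 32  # size of the beam codebook (assumed)
N_USERS = 3   # users aligned simultaneously

# Stand-in for the NN's predicted beam distribution vector: higher = more likely.
beam_dist = rng.random(N_BEAMS)

# Align all users at once from the indices of the dominant entries.
aligned = np.argsort(beam_dist)[-N_USERS:][::-1]
```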
Ke Ma, Dongxuan He, Hancun Sun (2020)
Huge overhead of beam training poses a significant challenge to mmWave communications. To address this issue, beam tracking has been widely investigated, whereas existing methods struggle with serious multipath interference and non-stationary scenarios. Inspired by the spatial similarity between low-frequency and mmWave channels in non-standalone architectures, this paper proposes to utilize prior low-frequency information to predict the optimal mmWave beam, where deep learning is adopted to enhance the prediction accuracy. Specifically, periodically estimated low-frequency channel state information (CSI) is applied to track the movement of user equipment, and a timing offset indicator is proposed to indicate the instant of mmWave beam training relative to low-frequency CSI estimation. Meanwhile, dedicated models based on long short-term memory (LSTM) networks are designed to implement the prediction. Simulation results show that our proposed scheme can achieve higher beamforming gain than the conventional methods while requiring little overhead of mmWave beam training.
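The core of the scheme above is a recurrent model that consumes a sequence of periodic low-frequency CSI estimates and emits a mmWave beam index. A minimal numpy sketch of one LSTM cell doing this follows; the feature sizes, toy weights, and five-snapshot sequence are all assumptions, not the paper's dedicated models.

```python
import numpy as np

rng = np.random.default_rng(4)
D_CSI = 8     # low-frequency CSI feature size (assumed)
D_HID = 16    # LSTM hidden size
N_BEAMS = 32  # mmWave beam codebook size (assumed)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# One LSTM cell with toy weights; gates are stacked as [input|forget|cell|output].
Wx = rng.standard_normal((4 * D_HID, D_CSI)) * 0.1
Wh = rng.standard_normal((4 * D_HID, D_HID)) * 0.1
b = np.zeros(4 * D_HID)
Wo = rng.standard_normal((N_BEAMS, D_HID)) * 0.1  # hidden state -> beam logits

def lstm_step(x, h, c):
    z = Wx @ x + Wh @ h + b
    i, f, g, o = np.split(z, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

h, c = np.zeros(D_HID), np.zeros(D_HID)
csi_seq = rng.standard_normal((5, D_CSI))  # periodic low-frequency CSI snapshots
for x in csi_seq:
    h, c = lstm_step(x, h, c)

predicted_beam = int(np.argmax(Wo @ h))  # mmWave beam predicted from CSI history
```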
Deep learning provides powerful means to learn from spectrum data and solve complex tasks in 5G and beyond, such as beam selection for initial access (IA) in mmWave communications. To establish the IA between the base station (e.g., gNodeB) and user equipment (UE) for directional transmissions, a deep neural network (DNN) can predict the beam that is best slanted to each UE by using the received signal strengths (RSSs) from a subset of possible narrow beams. While improving the latency and reliability of beam selection compared to the conventional IA that sweeps all beams, the DNN itself is susceptible to adversarial attacks. We present an adversarial attack by generating adversarial perturbations to manipulate the over-the-air captured RSSs as the input to the DNN. This attack reduces the IA performance significantly and fools the DNN into choosing beams with small RSSs, compared to jamming attacks with Gaussian or uniform noise.
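The gradient-based perturbation idea above can be illustrated on a toy linear beam classifier, where the gradient of the predicted beam's score with respect to the RSS input is just the corresponding weight row. This FGSM-style sketch is an assumption-laden stand-in for the paper's attack; the victim model, sizes, and perturbation budget `eps` are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
N_RSS = 6     # RSS measurements fed to the victim model (assumed)
N_BEAMS = 16  # candidate beams (assumed)

# Toy linear beam classifier standing in for the victim DNN.
W = rng.standard_normal((N_BEAMS, N_RSS))

rss = rng.rayleigh(size=N_RSS)
clean_beam = int(np.argmax(W @ rss))

# FGSM-style perturbation: push the RSS input against the gradient of the
# predicted beam's score (for a linear model, that gradient is W[clean_beam]).
eps = 0.5
rss_adv = rss - eps * np.sign(W[clean_beam])

clean_score = (W @ rss)[clean_beam]
adv_score = (W @ rss_adv)[clean_beam]  # strictly lower than clean_score
```

Unlike broadband Gaussian or uniform jamming, the perturbation is shaped by the model's own gradient, which is why a small `eps` suffices to flip the decision.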