Millimeter-wave (mmWave) communications rely on directional transmissions to overcome severe path loss. However, the use of narrow beams complicates the initial access procedure and increases latency, since the transmitter and receiver beams must be aligned to establish a link. In this paper, we investigate the feasibility of random beamforming for the cell-search phase of initial access. We develop a stochastic geometry framework to analyze performance in terms of detection failure probability and the expected latency of initial access as well as of total data transmission. We also compare our scheme with the widely used exhaustive search and iterative search schemes, in both the control plane and the data plane. Our numerical results show that, compared to the other two schemes, random beamforming can substantially reduce the latency of initial access with comparable failure probability in dense networks. We show that the gain of random beamforming is more prominent under light traffic and for low-latency services. Our work demonstrates that developing complex cell-discovery algorithms may be unnecessary in dense mmWave networks and thus sheds new light on mmWave network design.
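As a rough illustration of why random beamforming benefits from network density, the following toy Monte Carlo contrasts it with single-cell exhaustive search. This is a sketch under simplified assumptions (beam 0 stands for the direction pointing at the user; all parameters are made up), not the paper's stochastic geometry framework:

```python
import random

def random_bf_slots(n_beams, n_bs, rng):
    """Slots until at least one of n_bs base stations, each picking a beam
    uniformly at random per slot, happens to point at the user (beam 0)."""
    slots = 0
    while True:
        slots += 1
        if any(rng.randrange(n_beams) == 0 for _ in range(n_bs)):
            return slots

rng = random.Random(0)
n_beams, n_bs, trials = 16, 8, 5000
avg_random = sum(random_bf_slots(n_beams, n_bs, rng) for _ in range(trials)) / trials
# Exhaustive search with a single base station sweeping the codebook in order:
avg_exhaustive = (n_beams + 1) / 2
```

In a dense deployment many base stations probe directions in parallel, so the geometric waiting time of random beamforming shrinks with the number of nearby cells, while the sweep time of exhaustive search is fixed by the codebook size.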
We present DeepIA, a deep neural network (DNN) framework for enabling fast and reliable initial access (IA) in AI-driven beyond-5G and 6G millimeter-wave (mmWave) networks. DeepIA reduces the beam sweep time compared to a conventional exhaustive-search-based IA process by utilizing only a subset of the available beams. DeepIA maps the received signal strengths (RSSs) obtained from a subset of beams to the beam that is best oriented toward the receiver. In both line-of-sight (LoS) and non-line-of-sight (NLoS) conditions, DeepIA reduces the IA time and outperforms the conventional IA's beam prediction accuracy. We show that DeepIA's beam prediction accuracy saturates with the number of beams used for IA and depends on the particular selection of the beams. In LoS conditions, the selection of the beams is consequential and improves accuracy by up to 70%; in NLoS conditions, it improves accuracy by up to 35%. We find that averaging multiple RSS snapshots further reduces the number of beams needed and achieves more than 95% accuracy in both LoS and NLoS conditions. Finally, we evaluate the beam prediction time of DeepIA through an embedded hardware implementation and show the improvement over conventional beam sweeping.
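The core mapping DeepIA learns can be sketched as a small multilayer perceptron from a k-dimensional RSS vector to a beam index in the full codebook. The sketch below uses untrained random weights purely to show the shapes involved; the actual DeepIA architecture and training procedure are not reproduced here:

```python
import numpy as np

def deepia_predict(rss_subset, W1, b1, W2, b2):
    """Map RSSs measured on k swept beams to the index of the predicted
    best beam in the full codebook of N beams (one hidden ReLU layer)."""
    h = np.maximum(0.0, rss_subset @ W1 + b1)  # hidden layer
    logits = h @ W2 + b2                       # one score per codebook beam
    return int(np.argmax(logits))

rng = np.random.default_rng(0)
k, hidden, N = 6, 32, 24          # sweep 6 beams, predict among 24 (made-up sizes)
W1 = rng.standard_normal((k, hidden)); b1 = np.zeros(hidden)
W2 = rng.standard_normal((hidden, N)); b2 = np.zeros(N)
beam = deepia_predict(rng.standard_normal(k), W1, b1, W2, b2)
```

The latency saving comes from the input dimension: only k beams are swept at test time, while the argmax still ranges over the full set of N candidate beams.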
In this paper, we investigate the combination of non-orthogonal multiple access and millimeter-wave communications (mmWave-NOMA). A downlink cellular system is considered, where both the base station and the users are equipped with analog phased arrays. A joint Tx-Rx beamforming and power allocation problem is formulated to maximize the achievable sum rate (ASR) subject to a minimum rate constraint for each user. As the problem is non-convex, we propose a sub-optimal solution with three stages. In the first stage, the optimal power allocation is obtained in closed form for an arbitrary fixed Tx-Rx beamforming. In the second stage, the optimal Rx beamforming is designed in closed form for an arbitrary fixed Tx beamforming. In the third stage, the original problem is reduced to a Tx beamforming problem using the previous results, and a boundary-compressed particle swarm optimization (BC-PSO) algorithm is proposed to obtain a sub-optimal solution. Extensive performance evaluations verify the rationale of the proposed solution and show that it achieves near-upper-bound performance in terms of ASR, significantly improving on the state-of-the-art schemes and the conventional mmWave orthogonal multiple access (mmWave-OMA) system.
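The third-stage outer loop can be illustrated with plain particle swarm optimization maximizing a generic objective; the boundary-compression step that distinguishes BC-PSO, and the paper's actual Tx beamforming objective, are not reproduced here:

```python
import random

def pso_maximize(f, dim, n_particles=20, iters=100, lo=-1.0, hi=1.0, seed=0):
    """Plain PSO: each particle tracks its personal best and is pulled toward
    the swarm's global best. Illustrative hyperparameters, not the paper's."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                # Clamp to the feasible box (BC-PSO instead compresses the boundary).
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective standing in for the ASR: maximize -(x^2 + y^2), optimum at origin.
best, val = pso_maximize(lambda p: -(p[0] ** 2 + p[1] ** 2), dim=2)
```

In the paper's pipeline, f would evaluate the ASR with the closed-form power allocation and Rx beamforming from the first two stages plugged in for each candidate Tx beamforming vector.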
Future millimeter-wave (mmWave) systems, 5G cellular or WiFi, must rely on highly directional links to overcome severe path loss in these frequency bands. Establishing such links requires the mutual discovery of the transmitter and the receiver, potentially leading to large latency and high energy consumption. In this work, we show that both the discovery latency and the energy consumption can be significantly reduced by using fully digital front-ends. In fact, we establish that by reducing the resolution of the fully digital front-ends, we can achieve lower energy consumption than with either analog or high-resolution digital beamformers. Since beamforming through an analog front-end allows sampling in only one direction at a time, the mobile device stays on longer than with a digital beamformer, which can collect spatial samples from all directions in one shot. We show that the energy consumed by the analog front-end can be four to six times that of the digital front-ends, depending on the size of the employed antenna arrays. We recognize, however, that using fully digital beamforming after beam discovery, i.e., for data transmission, is not viable from a power-consumption standpoint. To address this issue, we propose the use of digital beamformers with low-resolution (4-bit) analog-to-digital converters. This reduction in resolution brings the power consumption down to the level of analog beamforming for data transmission while retaining the spatial multiplexing capabilities of fully digital beamforming, thus reducing initial discovery latency and improving energy efficiency.
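The trade-off described above reduces to a power-draw-times-on-time calculation: analog front-ends draw less power but must stay on for one slot per direction, while digital front-ends sample all directions at once at a higher power draw. The numbers below are purely illustrative, not taken from the paper:

```python
def discovery_energy_j(power_w, n_slots, slot_s):
    """Energy = front-end power draw x time the radio is on during discovery."""
    return power_w * n_slots * slot_s

# Illustrative, made-up figures chosen only to show the mechanism:
n_dirs = 64   # beam directions to search
slot = 1e-4   # seconds per measurement slot
analog  = discovery_energy_j(0.3, n_dirs, slot)   # analog: one direction per slot
digital = discovery_energy_j(4.0, 1, slot)        # fully digital: all directions at once
low_res = discovery_energy_j(0.35, 1, slot)       # 4-bit ADCs cut the digital power draw
```

Even though the digital front-end draws more power per unit time, its on-time shrinks by a factor of the codebook size, which is what produces the several-fold energy advantage during discovery; low-resolution ADCs then keep the digital power draw near the analog level for data transmission.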
This paper presents DeepIA, a deep learning solution for faster and more accurate initial access (IA) in 5G millimeter-wave (mmWave) networks compared to conventional IA. By utilizing a subset of beams in the IA process, DeepIA removes the need for an exhaustive beam search, thereby reducing the beam sweep time in IA. A deep neural network (DNN) is trained to learn the complex mapping from the received signal strengths (RSSs) collected with a reduced number of beams to the optimal spatial beam of the receiver (among a larger set of beams). At test time, DeepIA measures RSSs from only a small number of beams and runs the DNN to predict the best beam for IA. We show that DeepIA reduces the IA time by sweeping fewer beams and significantly outperforms the conventional IA's beam prediction accuracy in both line-of-sight (LoS) and non-line-of-sight (NLoS) mmWave channel conditions.
This paper considers a class of multi-channel random access algorithms, where contending devices may send multiple copies (replicas) of their messages to the central base station. We first develop a hypothetical algorithm that provides a lower bound on the access delay achievable within this class. We then propose a feasible access control algorithm that achieves low access delay by sending multiple message replicas and approaches the performance of the hypothetical algorithm. The resulting performance is well approximated by a simple lower bound, derived for a large number of channels.
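The replica mechanism can be illustrated with a toy collision model of a single slot: a device's message gets through if at least one of its replicas lands on a channel used by no one else. This is a sketch under that simple assumption (parameters made up), not the paper's access control algorithm:

```python
import random

def slot_successes(n_devices, n_channels, n_replicas, rng):
    """Count devices whose message gets through in one slot: a device succeeds
    if at least one of its replicas is alone on its channel."""
    placements = [rng.sample(range(n_channels), n_replicas)
                  for _ in range(n_devices)]
    load = [0] * n_channels
    for chans in placements:
        for c in chans:
            load[c] += 1
    return sum(any(load[c] == 1 for c in chans) for chans in placements)

rng = random.Random(1)
trials = 2000
avg = sum(slot_successes(5, 20, 2, rng) for _ in range(trials)) / trials
```

Sending replicas trades extra channel load for diversity: at light load, the chance that every replica of a message collides falls roughly geometrically with the number of replicas, which is what drives the access delay toward the hypothetical lower bound.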