Highly directional millimeter wave (mmWave) radios need to perform beam management to establish and maintain reliable links. To do so, existing solutions mostly rely on explicit coordination between the transmitter (TX) and the receiver (RX), which significantly reduces the airtime available for communication and further complicates the network protocol design. This paper advances the state of the art by presenting DeepBeam, a framework for beam management that requires neither pilot sequences from the TX nor any beam sweeping or synchronization at the RX. This is achieved by inferring (i) the Angle of Arrival (AoA) of the beam and (ii) the actual beam being used by the transmitter through waveform-level deep learning on ongoing transmissions between the TX and other receivers. In this way, the RX can associate Signal-to-Noise Ratio (SNR) levels with beams without explicit coordination with the TX. This is possible because different beam patterns introduce different impairments to the waveform, which can subsequently be learned by a convolutional neural network (CNN). We conduct an extensive experimental data collection campaign in which we collect more than 4 TB of mmWave waveforms with (i) 4 phased-array antennas at 60.48 GHz; (ii) 2 codebooks containing 24 one-dimensional beams and 12 two-dimensional beams; (iii) 3 receiver gains; (iv) 3 different AoAs; and (v) multiple TX and RX locations. Moreover, we collect waveform data with two custom-designed mmWave software-defined radios with fully digital beamforming architectures at 58 GHz. Results show that DeepBeam (i) achieves an accuracy of up to 96%, 84%, and 77% with a 5-beam, 12-beam, and 24-beam codebook, respectively; and (ii) reduces latency by up to 7x with respect to the 5G NR initial beam sweep in a default configuration and with a 12-beam codebook. The waveform dataset and the full DeepBeam code repository are publicly available.
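To make the waveform-level inference concrete, here is a minimal PyTorch sketch of a 1D CNN that maps a window of raw I/Q samples to a TX beam index; the window length, layer sizes, and 24-beam output are illustrative assumptions, not the actual DeepBeam architecture.

```python
import torch
import torch.nn as nn

class BeamClassifierCNN(nn.Module):
    """Toy 1D CNN that maps a window of raw I/Q samples to a TX beam index.

    Input shape: (batch, 2, 2048) -- I and Q as two channels (assumed window length).
    Output: logits over the beams of the codebook (24 here, as an assumption).
    """
    def __init__(self, num_beams: int = 24):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(32),        # collapse the time axis to a fixed size
        )
        self.classifier = nn.Linear(32 * 32, num_beams)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x)
        return self.classifier(h.flatten(start_dim=1))

# Example: classify one batch of synthetic waveforms (stand-ins for captured I/Q windows).
model = BeamClassifierCNN()
iq_batch = torch.randn(8, 2, 2048)
beam_logits = model(iq_batch)                # shape (8, 24)
predicted_beam = beam_logits.argmax(dim=1)
```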
To optimally cover users in millimeter-wave (mmWave) networks, clustering is needed to identify the number and direction of beams. The mobility of users motivates the need for an online clustering scheme to maintain up-to-date beams toward those clusters. Furthermore, user mobility leads to varying cluster patterns (i.e., users move from the coverage of one beam to another), causing a dynamic traffic load per beam. As such, efficient radio resource allocation and beam management are needed to address the dynamicity that arises from the mobility of users and their traffic. In this paper, we consider the coexistence of Ultra-Reliable Low-Latency Communication (URLLC) and enhanced Mobile BroadBand (eMBB) users in 5G mmWave networks and propose a Quality-of-Service (QoS)-aware clustering and resource allocation scheme. Specifically, Density-Based Spatial Clustering of Applications with Noise (DBSCAN) is used for online clustering of users and for selecting the number of beams. In addition, a Long Short-Term Memory (LSTM)-based Deep Reinforcement Learning (DRL) scheme is used for resource block allocation. The performance of the proposed scheme is compared to a baseline that uses K-means and priority-based proportional fairness for clustering and resource allocation, respectively. Our simulation results show that the proposed scheme outperforms the baseline algorithm in terms of the latency, reliability, and rate of URLLC users, as well as the rate of eMBB users.
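The online clustering step can be illustrated with scikit-learn's DBSCAN; the sketch below assumes users are represented by 2D positions and that each cluster centroid defines one beam direction, with eps and min_samples chosen arbitrarily for illustration rather than taken from the paper.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Illustrative 2D user positions (meters) around a base station at the origin.
rng = np.random.default_rng(0)
users = np.vstack([
    rng.normal(loc=[20, 5], scale=2.0, size=(15, 2)),    # cluster served by one beam
    rng.normal(loc=[-10, 25], scale=2.0, size=(10, 2)),  # second cluster
    rng.uniform(-40, 40, size=(5, 2)),                    # stragglers / noise points
])

# DBSCAN groups nearby users; the number of clusters fixes the number of beams.
labels = DBSCAN(eps=5.0, min_samples=4).fit_predict(users)
cluster_ids = [c for c in np.unique(labels) if c != -1]   # -1 marks noise

for c in cluster_ids:
    centroid = users[labels == c].mean(axis=0)
    # Beam steering angle toward the cluster centroid (boresight along the x-axis).
    angle_deg = np.degrees(np.arctan2(centroid[1], centroid[0]))
    print(f"beam {c}: {np.sum(labels == c)} users, steer toward {angle_deg:.1f} deg")
```

Rerunning this step periodically on fresh position estimates gives the online behavior described above: as users move, clusters (and hence the number and direction of beams) are updated.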
Millimeter wave (mmWave) communication has attracted increasing attention as a promising technology for 5G networks. One of the key architectural features of mmWave is the use of massive antenna arrays at both the transmitter and the receiver sides. Therefore, by employing directional beamforming (BF), both mmWave base stations (MBSs) and mmWave users (MUEs) are capable of supporting multi-beam simultaneous transmissions. However, most prior studies have considered only a single beam, and thus do not exploit the full potential of mmWave. In this context, in order to improve the performance of short-range indoor mmWave networks with multiple reflections, we investigate the challenges and potential solutions of downlink multi-user multi-beam transmission, which can be described as a high-dimensional (i.e., beamspace) multi-user multiple-input multiple-output (MU-MIMO) technique, including multi-user BF training, simultaneous user grouping, and multi-user multi-beam power allocation. Furthermore, we present theoretical and numerical results demonstrating that beamspace MU-MIMO can largely improve the rate performance of mmWave systems compared with single-beam transmission.
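A rough NumPy sketch of the beamspace idea, under idealized assumptions (single-path line-of-sight users, a half-wavelength uniform linear array, and a DFT codebook), comparing time-shared single-beam transmission with two simultaneous beams; the array size, user angles, and SNR are illustrative only.

```python
import numpy as np

N = 32                                   # TX antennas (assumed ULA)
snr_lin = 10 ** (10 / 10)                # 10 dB total transmit SNR (illustrative)

def steering(theta_deg: float) -> np.ndarray:
    """Half-wavelength ULA steering vector toward theta (degrees)."""
    n = np.arange(N)
    return np.exp(1j * np.pi * n * np.sin(np.radians(theta_deg))) / np.sqrt(N)

# Two single-path (LoS) users at different angles; each channel is a steering vector.
h1, h2 = steering(-20.0), steering(35.0)

# DFT codebook spanning the beamspace; pick the best codebook beam per user.
dft = np.fft.fft(np.eye(N)) / np.sqrt(N)
b1 = dft[:, np.argmax(np.abs(h1.conj() @ dft))]
b2 = dft[:, np.argmax(np.abs(h2.conj() @ dft))]

# (a) Single-beam baseline: serve the users in separate time slots (half airtime each).
r_single = 0.5 * (np.log2(1 + snr_lin * np.abs(h1.conj() @ b1) ** 2)
                  + np.log2(1 + snr_lin * np.abs(h2.conj() @ b2) ** 2))

# (b) Two simultaneous beams with equal power split; the other beam acts as interference.
p = snr_lin / 2
sinr1 = p * np.abs(h1.conj() @ b1) ** 2 / (1 + p * np.abs(h1.conj() @ b2) ** 2)
sinr2 = p * np.abs(h2.conj() @ b2) ** 2 / (1 + p * np.abs(h2.conj() @ b1) ** 2)
r_multi = np.log2(1 + sinr1) + np.log2(1 + sinr2)

print(f"single-beam (time-shared) sum rate: {r_single:.2f} bit/s/Hz")
print(f"two simultaneous beams sum rate:    {r_multi:.2f} bit/s/Hz")
```

Because the two users are well separated in angle, the inter-beam interference terms are small and the simultaneous two-beam transmission achieves a higher sum rate than time-sharing a single beam.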
For millimeter-wave networks, this paper presents a paradigm shift: leveraging time-consecutive camera images in handover decision problems. While making handover decisions, it is important to predict future long-term performance (e.g., the cumulative sum of time-varying data rates) proactively, to avoid making myopic decisions. However, this study experimentally observes that the time variation in the received powers is not necessarily informative for proactively predicting the rapid degradation of data rates caused by moving obstacles. To overcome this challenge, this study proposes a proactive framework wherein handover timings are optimized while obstacle-caused data rate degradations are predicted before the degradations occur. The key idea is to expand the state space to include time-consecutive camera images, which provide informative features for predicting such data rate degradations. To overcome the difficulty of handling the large dimensionality of the expanded state space, we use deep reinforcement learning to decide the handover timings. Evaluations based on experimentally obtained camera images and received powers demonstrate that the expanded state space enables (i) the prediction of obstacle-caused data rate degradations as early as 500 ms before the degradations occur and (ii) superior performance compared with a handover framework without the state-space expansion.
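As a toy illustration of the state-space expansion, the snippet below builds a state vector from a stack of the last few downsampled grayscale camera frames concatenated with a short received-power history; the frame resolution, stack depth, and synthetic arrays are stand-ins and do not reflect the paper's actual setup.

```python
import numpy as np
from collections import deque

K_FRAMES = 4                 # number of consecutive camera frames kept (assumed)
FRAME_SHAPE = (36, 64)       # downsampled grayscale resolution (assumed)
POWER_HISTORY = 8            # recent received-power samples kept (assumed)

frames = deque(maxlen=K_FRAMES)
powers = deque(maxlen=POWER_HISTORY)

def build_state() -> np.ndarray:
    """Flatten stacked frames + power history into one state vector for the DRL agent."""
    img_part = np.stack(list(frames)).astype(np.float32).ravel() / 255.0
    pwr_part = np.asarray(powers, dtype=np.float32)
    return np.concatenate([img_part, pwr_part])

# Fill the buffers with synthetic observations (stand-ins for camera / RF measurements).
for _ in range(K_FRAMES):
    frames.append(np.random.randint(0, 256, FRAME_SHAPE, dtype=np.uint8))
for _ in range(POWER_HISTORY):
    powers.append(np.float32(-65.0 + np.random.randn()))   # dBm-like values

state = build_state()        # input to a DRL agent choosing {stay, hand over}
print(state.shape)           # (4*36*64 + 8,) = (9224,)
```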
Millimeter wave channels exhibit structure that allows beam alignment with fewer channel measurements than exhaustive beam search. From a compressed sensing (CS) perspective, the received channel measurements are usually obtained by multiplying a CS matrix with a sparse representation of the channel matrix. Due to the constraints imposed by analog processing, however, designing CS matrices that efficiently exploit the channel structure is challenging. In this paper, we propose an end-to-end deep learning technique to design a structured CS matrix that is well suited to the underlying channel distribution, leveraging both sparsity and the particular spatial structure that appears in vehicular channels. The channel measurements acquired with the designed CS matrix are then used to predict the best beam for link configuration. Simulation results for vehicular communication channels indicate that our deep learning-based approach achieves better beam alignment than standard CS techniques that use a random phase-shift-based design.
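To make the analog-constrained measurement model concrete, the following NumPy sketch acquires compressed measurements of a sparse beamspace channel through a random phase-shift matrix (unit-modulus entries only, mirroring the analog hardware constraint); a learned matrix as proposed in the paper would replace the random phases, and all dimensions here are arbitrary.

```python
import numpy as np

N = 64        # antennas / beamspace dimension (illustrative)
M = 12        # number of compressed measurements (illustrative)
rng = np.random.default_rng(1)

# Sparse beamspace channel: energy concentrated on a few beams.
x = np.zeros(N, dtype=complex)
support = rng.choice(N, size=3, replace=False)
x[support] = rng.standard_normal(3) + 1j * rng.standard_normal(3)

# Analog constraint: measurement matrix entries are phase shifts of unit magnitude.
phases = rng.uniform(0, 2 * np.pi, size=(M, N))
A = np.exp(1j * phases) / np.sqrt(N)

# Compressed, noisy measurements observed at the receiver.
y = A @ x + 0.01 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

# Crudest possible beam prediction: pick the beamspace index whose column of A
# correlates best with the measurements (a stand-in for OMP or a learned predictor,
# and not guaranteed to succeed with so few random measurements).
scores = np.abs(A.conj().T @ y)
predicted_beam = int(np.argmax(scores))
print("true strongest beam:", support[np.argmax(np.abs(x[support]))])
print("predicted beam:     ", predicted_beam)
```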
This paper considers remote state estimation in a cyber-physical system (CPS) using multiple sensors. The measurements of each sensor are transmitted to a remote estimator over a shared channel, where simultaneous transmissions from other sensors are regarded as interference signals. In such a competitive environment, each sensor needs to choose its transmission power for sending data packets while taking into account the behavior of the other sensors. To model this interactive decision-making process among the sensors, we introduce a multi-player non-cooperative game framework. To overcome the inefficiency arising from the Nash equilibrium (NE) solution, we propose a correlation policy, along with the notion of correlation equilibrium (CE). An analytical comparison of the game value between the NE and the CE is provided, both with and without power expenditure constraints for each sensor. Numerical simulations further demonstrate the comparison results.
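As a toy illustration of the non-cooperative power game (not the paper's actual model), the sketch below runs best-response iteration for two sensors sharing a channel, assuming each sensor's utility is log(1 + SINR) minus a linear power cost; the gains, noise level, and cost are arbitrary.

```python
import numpy as np

GAIN = np.array([[1.0, 0.4],       # GAIN[i, j]: channel gain from sensor j at estimator i
                 [0.3, 1.0]])      # off-diagonal terms act as interference (illustrative)
NOISE = 0.1
COST = 0.5                          # per-unit power cost (illustrative)
P_MAX = 2.0

def utility(i: int, p: np.ndarray) -> float:
    """Reception 'reward' log(1 + SINR_i) minus a linear power cost (assumed form)."""
    interference = GAIN[i, 1 - i] * p[1 - i]
    sinr = GAIN[i, i] * p[i] / (NOISE + interference)
    return np.log(1 + sinr) - COST * p[i]

def best_response(i: int, p: np.ndarray, grid=np.linspace(0, P_MAX, 401)) -> float:
    """Sensor i's best power level against the other sensor's current power (grid search)."""
    def candidate(q: float) -> float:
        trial = p.copy()
        trial[i] = q
        return utility(i, trial)
    return grid[np.argmax([candidate(q) for q in grid])]

p = np.array([P_MAX, P_MAX])        # start from maximum power
for _ in range(50):                  # iterate best responses toward an (approximate) NE
    p = np.array([best_response(0, p), best_response(1, p)])

print("approximate Nash equilibrium powers:", np.round(p, 3))
print("utilities at NE:", [round(utility(i, p), 3) for i in range(2)])
```

A correlation policy, as considered in the paper, would instead let the sensors condition their power choices on a shared randomization signal rather than best-responding independently.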