The paper describes an online deep learning algorithm for adaptive modulation and coding in 5G Massive MIMO. The algorithm is based on a fully connected neural network, which is initially trained on the output of a traditional algorithm and then incrementally retrained using the service feedback on its output. We show the advantage of our solution over the state-of-the-art Q-Learning approach and provide system-level simulation results to support this conclusion in various scenarios with different channel characteristics and user speeds. Compared with traditional outer-loop link adaptation (OLLA), our algorithm improves user throughput by 10% to 20% in the full-buffer case.
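A minimal sketch of this train-then-retrain loop, assuming illustrative feature dimensions, MCS table size, and NACK handling (none of which are specified above), might look like this in PyTorch:

```python
# Hypothetical sketch: a fully connected network maps channel-quality
# features to an MCS index; it is pretrained on the decisions of a
# traditional link-adaptation algorithm and then incrementally retrained
# from transmission feedback (ACK/NACK).
import torch
import torch.nn as nn

N_FEATURES, N_MCS = 8, 29  # assumed feature size and MCS table length

net = nn.Sequential(
    nn.Linear(N_FEATURES, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, N_MCS),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def pretrain_step(features, legacy_mcs):
    """Supervised pretraining on the traditional algorithm's MCS choices."""
    opt.zero_grad()
    loss_fn(net(features), legacy_mcs).backward()
    opt.step()

def online_step(features, chosen_mcs, ack):
    """Incremental retraining from feedback: keep the chosen MCS on ACK;
    on NACK, target a more robust (lower) MCS -- an assumed update rule."""
    target = chosen_mcs if ack else torch.clamp(chosen_mcs - 1, min=0)
    opt.zero_grad()
    loss_fn(net(features), target).backward()
    opt.step()
```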
In this paper, we consider massive multiple-input-multiple-output (MIMO) communication systems with a uniform planar array (UPA) at the base station (BS) and investigate the downlink precoding with imperfect channel state information (CSI). By exploiting both instantaneous and statistical CSI, we aim to design precoding vectors to maximize the ergodic rate (e.g., sum rate, minimum rate, etc.) subject to a total transmit power constraint. To maximize an upper bound on the ergodic rate, we leverage the corresponding Lagrangian formulation and identify the structural characteristics of the optimal precoder as the solution to a generalized eigenvalue problem. As such, the high-dimensional precoder design problem reduces to a low-dimensional power control problem. The Lagrange multipliers play a crucial role in determining both the precoder directions and the power parameters, yet they are challenging to compute directly. To determine the Lagrange multipliers, we develop a general framework underpinned by a properly designed neural network that learns directly from CSI. To further relieve the computational burden, we obtain a low-complexity framework by decomposing the original problem into computationally efficient subproblems that handle instantaneous and statistical CSI separately. With the offline-pretrained neural network, the online computational complexity of precoding is substantially reduced compared with the existing iterative algorithm while maintaining nearly the same performance.
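As a numerical illustration of this structural result: once the Lagrange multipliers are fixed, each precoder direction can be read off a generalized eigenvalue problem. In the sketch below, A_k and B_k are illustrative Hermitian stand-ins for the CSI-dependent matrices, not the paper's exact constructions:

```python
# Hedged sketch: the precoder direction as the principal generalized
# eigenvector of a Hermitian matrix pair built from the CSI and the
# Lagrange multipliers (matrix definitions here are placeholders).
import numpy as np
from scipy.linalg import eigh

def precoder_direction(A_k, B_k):
    """Unit-norm principal generalized eigenvector of (A_k, B_k).

    A_k: Hermitian PSD matrix associated with user k's own channel.
    B_k: Hermitian PD matrix combining the other users' channels and the
         power-constraint term, weighted by the Lagrange multipliers.
    """
    w, V = eigh(A_k, B_k)          # generalized eigendecomposition, ascending order
    v = V[:, -1]                   # eigenvector of the largest eigenvalue
    return v / np.linalg.norm(v)

# toy usage with a random M-antenna channel
M = 8
H = (np.random.randn(M, M) + 1j * np.random.randn(M, M)) / np.sqrt(2)
A_k = H @ H.conj().T                       # Hermitian PSD
B_k = np.eye(M) + 0.1 * (H.conj().T @ H)   # Hermitian PD
v_k = precoder_direction(A_k, B_k)
```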
In this work, we study underlay radar-massive MIMO cellular coexistence in LoS/near-LoS channels, where both systems have 3D beamforming capabilities. Using mathematical tools from stochastic geometry, we derive an upper bound on the average interference power at the radar due to the 3D massive MIMO cellular downlink under worst-case cell-edge beamforming conditions. To overcome the technical challenges imposed by asymmetric and arbitrarily large cells, we devise a novel construction in which each Poisson Voronoi (PV) cell is bounded by its circumcircle, bounding the effect of the random cell shapes on the average interference. Since this model is intractable for further analysis due to the correlation between adjacent PV cells' shapes and sizes, we propose a tractable nominal interference model, where we model each PV cell as a circular disk with an area equal to the average area of the typical cell. We quantify the gap in the average interference power between these two models and show that the upper bound is tight for realistic deployment parameters. We also compare them with a more practical but intractable MU-MIMO scheduling model to show that our worst-case interference models exhibit the same trends and do not deviate significantly from realistic scheduler models. Under the nominal interference model, we characterize the interference distribution using the dominant-interferer approximation by deriving the equi-interference contour expression when the typical receiver uses 3D beamforming. Finally, we use tractable expressions for the interference distribution to characterize the radar's spatial probability of false alarm/detection in a quasi-static target tracking scenario. Our results reveal useful trends in the average interference as a function of the deployment parameters (BS density, exclusion zone radius, antenna height, transmit power of each BS, etc.).
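A rough Monte Carlo sketch of the nominal interference model, under simplified placeholder assumptions (unit antenna gains and a pure power-law path loss rather than 3D beam patterns), could be written as:

```python
# Assumed setup: BSs form a PPP of density lam; each PV cell is replaced by
# a disk whose area equals the typical cell's average area 1/lam; the radar
# sits at the origin behind an exclusion zone of radius r_excl.
import numpy as np

def mean_interference(lam, r_excl, p_tx, alpha=3.5, R=5e4, trials=500):
    rng = np.random.default_rng(0)
    acc = 0.0
    for _ in range(trials):
        n = rng.poisson(lam * np.pi * R**2)    # number of BSs in a disk of radius R
        r = R * np.sqrt(rng.random(n))         # uniform-in-disk radii
        r = r[r > r_excl]                      # drop BSs inside the exclusion zone
        acc += np.sum(p_tx * r**(-alpha))      # aggregate power-law interference
    return acc / trials

lam = 1e-6                                     # BS density per m^2
r_cell = 1.0 / np.sqrt(np.pi * lam)            # nominal cell radius: disk area = 1/lam
I_avg = mean_interference(lam, r_excl=500.0, p_tx=1.0)
```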
Vehicular edge computing (VEC) is envisioned as a promising approach to process the explosive computation tasks of vehicular users (VUs). In the VEC system, each VU allocates power to process some tasks through offloading and the remaining tasks through local execution. During offloading, each VU uses a multiple-input multiple-output non-orthogonal multiple access (MIMO-NOMA) channel to improve the channel spectrum efficiency and capacity. However, the channel condition is uncertain due to the interference among VUs caused by the MIMO-NOMA channel and the time-varying path loss caused by each VU's mobility. In addition, the task arrivals of each VU are stochastic in the real world. The stochastic task arrivals and uncertain channel condition greatly affect each VU's power consumption and task latency. It is therefore critical to design a power allocation scheme that accounts for stochastic task arrivals and channel variation to optimize the long-term reward, comprising power consumption and latency, in the MIMO-NOMA VEC system. Different from traditional centralized deep reinforcement learning (DRL)-based schemes, this paper constructs a decentralized DRL framework to formulate the power allocation optimization problem, where the local observations are selected as the state. The deep deterministic policy gradient (DDPG) algorithm is adopted to learn the optimal power allocation scheme within this decentralized DRL framework. Simulation results demonstrate that our proposed power allocation scheme outperforms the existing schemes.
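A single-agent skeleton of the DDPG update in such a decentralized framework is sketched below; the local-observation and action dimensions, network widths, and hyperparameters are placeholder assumptions, and each VU would run its own copy on its local observations:

```python
# Hypothetical sketch: an actor maps local observations (e.g., queue length,
# channel gain, interference estimate) to a transmit-power fraction in [0, 1];
# a critic scores state-action pairs; target networks and a replay buffer
# stabilize learning, as in standard DDPG.
import collections
import random
import torch
import torch.nn as nn

OBS, ACT = 3, 1                      # assumed observation/action dimensions

def mlp(n_in, n_out, out_act=None):
    layers = [nn.Linear(n_in, 64), nn.ReLU(), nn.Linear(64, n_out)]
    return nn.Sequential(*layers, out_act) if out_act else nn.Sequential(*layers)

actor, critic = mlp(OBS, ACT, nn.Sigmoid()), mlp(OBS + ACT, 1)
actor_t, critic_t = mlp(OBS, ACT, nn.Sigmoid()), mlp(OBS + ACT, 1)
actor_t.load_state_dict(actor.state_dict())
critic_t.load_state_dict(critic.state_dict())
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
buf = collections.deque(maxlen=100_000)   # transitions (s, a, r, s2) as tensors
gamma, tau = 0.99, 0.005

def ddpg_update(batch=64):
    s, a, r, s2 = map(torch.stack, zip(*random.sample(buf, batch)))
    with torch.no_grad():                 # Bellman target from the target networks
        y = r + gamma * critic_t(torch.cat([s2, actor_t(s2)], dim=1))
    loss_c = nn.functional.mse_loss(critic(torch.cat([s, a], dim=1)), y)
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()
    loss_a = -critic(torch.cat([s, actor(s)], dim=1)).mean()
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()
    for tgt, net in ((actor_t, actor), (critic_t, critic)):   # soft target update
        for pt, p in zip(tgt.parameters(), net.parameters()):
            pt.data.mul_(1 - tau).add_(p.data, alpha=tau)
```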
Channel estimation is one of the key issues in practical massive multiple-input multiple-output (MIMO) systems. Compared with conventional estimation algorithms, deep learning (DL)-based ones have exhibited great potential in terms of performance and complexity. In this paper, an attention mechanism, exploiting the channel distribution characteristics, is proposed to improve the estimation accuracy of highly separable channels with narrow angular spread by realizing a divide-and-conquer policy. Specifically, we introduce a novel attention-aided DL channel estimation framework for conventional massive MIMO systems and devise an embedding method to effectively integrate the attention mechanism into the fully connected neural network for the hybrid analog-digital (HAD) architecture. Simulation results show that in both scenarios the channel estimation performance is significantly improved with the aid of attention, at the cost of a small complexity overhead. Furthermore, the proposed approach achieves strong robustness under different system and channel parameters, which further strengthens its practical value. We also investigate the distributions of the learned attention maps to reveal the role of attention, which endows the proposed approach with a certain degree of interpretability.
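One simple way to embed attention into a fully connected estimator is a gating branch that reweights the hidden features; the sketch below illustrates this idea with assumed dimensions, and the exact embedding used for the HAD architecture may differ:

```python
# Hedged sketch: a sigmoid gating branch computes per-feature attention
# weights from the input and rescales the hidden activations, a simple
# squeeze-and-excitation-style way to add attention to an FC estimator.
import torch
import torch.nn as nn

class AttnFCEstimator(nn.Module):
    def __init__(self, n_in, n_hidden, n_out):
        super().__init__()
        self.fc1 = nn.Linear(n_in, n_hidden)
        self.attn = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.fc2 = nn.Linear(n_hidden, n_out)

    def forward(self, y):
        h = torch.relu(self.fc1(y))          # hidden features
        return self.fc2(h * self.attn(y))    # attention-weighted features

est = AttnFCEstimator(n_in=128, n_hidden=512, n_out=256)
h_hat = est(torch.randn(4, 128))             # pilots in, channel estimate out
```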
Massive multiuser multiple-input multiple-output (MU-MIMO) has been the mainstream technology in fifth-generation wireless systems. To reduce the high hardware cost and power consumption of massive MU-MIMO, low-resolution digital-to-analog converters (DACs) are used for each antenna and radio frequency (RF) chain in downlink transmission, which brings challenges for precoding design. To circumvent these obstacles, we develop a model-driven deep learning (DL) network for massive MU-MIMO with finite-alphabet precoding in this article. The architecture of the network is specially designed by unfolding an iterative algorithm. Compared with traditional state-of-the-art techniques, the proposed DL-based precoder shows significant advantages in performance, complexity, and robustness to channel estimation error under Rayleigh fading channels.
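The unfolding idea can be illustrated with a toy real-valued sketch in which each network layer replays one gradient-style iteration with a learnable step size, followed by a soft projection toward a one-bit alphabet; the update rule here is an illustrative stand-in, not the paper's exact iteration:

```python
# Assumed toy model: minimize ||Hx - s||^2 over x constrained toward the
# 1-bit alphabet {-1, +1}; each unfolded layer is one projected-gradient
# step with a learned step size and a learned tanh quantizer sharpness.
import torch
import torch.nn as nn

class UnfoldedPrecoder(nn.Module):
    def __init__(self, n_layers=8):
        super().__init__()
        self.step = nn.Parameter(torch.full((n_layers,), 0.1))   # learned step sizes
        self.sharp = nn.Parameter(torch.ones(n_layers))          # quantizer sharpness

    def forward(self, H, s):
        # H: (B, K, M) channel, s: (B, K) target symbols, x: (B, M) precoded signal
        x = torch.zeros(H.shape[0], H.shape[-1])
        for t in range(len(self.step)):
            r = torch.einsum('bkm,bm->bk', H, x) - s                 # residual Hx - s
            g = torch.einsum('bkm,bk->bm', H, r)                     # gradient H^T r
            x = torch.tanh(self.sharp[t] * (x - self.step[t] * g))   # soft 1-bit projection
        return x

precoder = UnfoldedPrecoder()
x = precoder(torch.randn(4, 4, 16), torch.randn(4, 4))   # toy usage
```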