Unmanned aerial vehicles (UAVs) are emerging in commercial spaces and will support many applications and services, such as smart agriculture, dynamic network deployment and coverage extension, and surveillance and security. Unmanned aircraft system (UAS) traffic management (UTM) provides a framework for safe UAV operation, integrating UAV controllers and central databases via a communications network. This paper discusses the challenges and opportunities of machine learning (ML) for effectively providing critical UTM services. We introduce the four pillars of UTM---operation planning, situational awareness, status and advisories, and security---and discuss their main services, specific opportunities for ML, and the ongoing research. We conclude that the multi-faceted operating environment and diverse operational parameters will benefit from collected data and data-driven algorithms, as well as from online learning to cope with new UAV operation situations.
We consider the relaying application of unmanned aerial vehicles (UAVs), in which UAVs are placed between two transceivers (TRs) to increase the throughput of the system. Instead of studying the placement of the UAVs, as pursued in the existing literature, we investigate the placement of a jammer, or a major source of interference, on the ground to most effectively degrade the performance of the system, measured by the maximum achievable data rate between the TRs. We demonstrate that optimally placing the jammer is in general a non-convex optimization problem, for which a direct solution is intractable. We then exploit the inherent structure of the signal-to-interference ratio (SIR) expressions to propose a tractable approach for finding the optimal position of the jammer. Based on the proposed approach, we investigate the optimal positioning of the jammer in both dual-hop and multi-hop UAV relaying settings. Numerical simulations are provided to evaluate the performance of the proposed method.
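To make the setting above concrete, the following minimal Python sketch brute-forces candidate ground positions for a jammer against a dual-hop UAV relay under a simple distance-based path-loss model. All positions, powers, and the path-loss exponent are illustrative assumptions, and exhaustive search is only a baseline; the paper instead exploits the structure of the SIR expressions to obtain a tractable placement method.

# Hedged sketch: brute-force baseline for ground-jammer placement against a
# dual-hop UAV relay (TR1 -> UAV -> TR2). Geometry, powers, and the path-loss
# exponent are assumptions, not the paper's model.
import numpy as np

tr1 = np.array([0.0, 0.0, 0.0])      # transceiver 1 (assumed position, metres)
tr2 = np.array([1000.0, 0.0, 0.0])   # transceiver 2
uav = np.array([500.0, 0.0, 100.0])  # relay UAV placed between the TRs
p_tx, p_jam, alpha = 1.0, 1.0, 2.0   # transmit/jammer powers, path-loss exponent

def rx_power(p, a, b):
    """Received power under a distance^-alpha model, with a 1 m guard distance."""
    return p * max(np.linalg.norm(a - b), 1.0) ** (-alpha)

def end_to_end_rate(jammer):
    """Interference-limited rate of the weaker hop (half-duplex decode-and-forward)."""
    sir_hop1 = rx_power(p_tx, tr1, uav) / rx_power(p_jam, jammer, uav)
    sir_hop2 = rx_power(p_tx, uav, tr2) / rx_power(p_jam, jammer, tr2)
    return 0.5 * np.log2(1.0 + min(sir_hop1, sir_hop2))

# Search candidate jammer positions on the ground plane (z = 0).
xs = np.linspace(0.0, 1000.0, 201)
ys = np.linspace(-200.0, 200.0, 81)
best = min((end_to_end_rate(np.array([x, y, 0.0])), x, y) for x in xs for y in ys)
print(f"worst-case rate {best[0]:.3f} bps/Hz at jammer position ({best[1]:.0f}, {best[2]:.0f})")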
Deployment of unmanned aerial vehicles (UAVs) has recently attracted significant attention due to a variety of practical use cases, such as surveillance, data gathering, and commodity delivery. Since UAVs are powered by batteries, energy-efficient communication is of paramount importance. In this paper, we investigate the problem of lifetime maximization of a UAV-assisted network in the presence of multiple sources of interference, where the UAVs are deployed to collect data from a set of wireless sensors. We demonstrate that the placement of the UAVs plays a key role in prolonging the lifetime of the network, since the required transmission powers of the UAVs are closely tied to their locations in space. In the proposed scenario, the UAVs transmit the gathered data to a primary UAV, called the leader, which is in charge of forwarding the data to the base station (BS) via a backhaul UAV network. Because the problem is highly non-convex, we deploy tools from spectral graph theory to tackle it. Simulation results demonstrate that our proposed method can significantly improve the lifetime of the UAV network.
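As a rough illustration of the spectral-graph machinery referenced above, the sketch below builds the Laplacian of an assumed UAV backhaul graph and reads off its algebraic connectivity (the second-smallest eigenvalue), a standard spectral indicator of network connectivity. The positions and communication range are invented for illustration; the paper's actual lifetime-maximization formulation is not reproduced.

# Hedged sketch: algebraic connectivity of an assumed UAV backhaul graph.
import numpy as np

# Assumed UAV positions (metres) and a simple link model: two UAVs are
# connected if they are within comm_range of each other.
uav_positions = np.array([[0, 0, 100], [300, 0, 120], [600, 100, 110], [900, 50, 130]], dtype=float)
comm_range = 400.0

n = len(uav_positions)
adjacency = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        if np.linalg.norm(uav_positions[i] - uav_positions[j]) <= comm_range:
            adjacency[i, j] = adjacency[j, i] = 1.0

laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
eigenvalues = np.sort(np.linalg.eigvalsh(laplacian))
algebraic_connectivity = eigenvalues[1]  # positive iff the backhaul graph is connected
print(f"algebraic connectivity: {algebraic_connectivity:.3f}")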
As the integration of unmanned aerial vehicles (UAVs) into visible light communications (VLC) can offer many benefits for massive-connectivity applications and services in 5G and beyond, this work considers UAV-assisted VLC with non-orthogonal multiple access. More specifically, we formulate a joint problem of power allocation and UAV placement to maximize the sum rate of all users, subject to constraints on power allocation, user quality of service, and UAV positions. Since the problem is non-convex and NP-hard in general, it is difficult to solve optimally. Moreover, the problem is not easily handled by conventional approaches, e.g., coordinate descent algorithms, owing to the channel model in VLC. We therefore propose the Harris hawks optimization (HHO) algorithm to solve the formulated problem and obtain an efficient solution. We then combine the HHO algorithm with artificial neural networks to propose a design that can be used in real-time applications and avoids the local-minima traps of conventional trainers. Numerical results are provided to verify the effectiveness of the proposed algorithm and further demonstrate that the proposed algorithm and HHO-based trainer are superior to several alternative schemes and existing metaheuristic algorithms.
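The skeleton below indicates how an HHO-style population search proceeds, with a toy objective standing in for the NOMA-VLC sum-rate problem. Only the exploration move and a simplified soft-besiege update are shown, so it is a hedged sketch rather than the paper's full HHO/ANN design.

# Hedged sketch: skeleton of a Harris hawks optimization (HHO) style search.
# The objective is a placeholder (sphere function); the paper maximizes a
# NOMA-VLC sum rate subject to power and QoS constraints instead.
import numpy as np

rng = np.random.default_rng(0)
dim, n_hawks, n_iter = 4, 20, 200
lb, ub = -5.0, 5.0

def objective(x):
    return np.sum(x ** 2)

hawks = rng.uniform(lb, ub, size=(n_hawks, dim))
rabbit = min(hawks, key=objective).copy()   # best solution ("rabbit") found so far

for t in range(n_iter):
    e0 = rng.uniform(-1, 1, size=n_hawks)
    energy = 2 * e0 * (1 - t / n_iter)      # escaping energy decays over iterations
    for i in range(n_hawks):
        if abs(energy[i]) >= 1:             # exploration: perch based on a random hawk
            k = rng.integers(n_hawks)
            hawks[i] = hawks[k] - rng.random() * np.abs(hawks[k] - 2 * rng.random() * hawks[i])
        else:                               # exploitation: simplified soft besiege around the rabbit
            hawks[i] = rabbit - energy[i] * np.abs(rabbit - hawks[i])
        hawks[i] = np.clip(hawks[i], lb, ub)
        if objective(hawks[i]) < objective(rabbit):
            rabbit = hawks[i].copy()

print("best objective:", objective(rabbit))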
Cellular-connected wireless connectivity provides new opportunities for virtual reality (VR) to offer a seamless user experience anywhere, at any time. To realize this vision, the quality of service (QoS) for wireless VR needs to be carefully defined to reflect human perception requirements. In this paper, we first identify the primary drivers of VR systems in terms of applications and use cases. We then map the human perception requirements to corresponding QoS requirements for four phases of VR technology development. To shed light on how to provide short- and long-range mobility for VR services, we further list four main use cases for cellular-connected wireless VR and identify their unique research challenges, along with their corresponding enabling technologies and solutions in 5G systems and beyond. Last but not least, we present a case study to demonstrate the effectiveness of our proposed solution and the unique QoS performance requirements of VR transmission compared with those of traditional video services in cellular networks.
Wireless cellular networks have many parameters that are normally tuned upon deployment and re-tuned as the network changes. Many operational parameters affect reference signal received power (RSRP), reference signal received quality (RSRQ), signal-to-interference-plus-noise ratio (SINR), and, ultimately, throughput. In this paper, we develop and compare two approaches for maximizing coverage and minimizing interference by jointly optimizing the transmit power and downtilt (elevation tilt) settings across sectors. To evaluate different parameter configurations offline, we construct a realistic simulation model that captures geographic correlations. Using this model, we evaluate two optimization methods: deep deterministic policy gradient (DDPG), a reinforcement learning (RL) algorithm, and multi-objective Bayesian optimization (BO). Our simulations show that both approaches significantly outperform random search and converge to comparable Pareto frontiers, but that BO converges with two orders of magnitude fewer evaluations than DDPG. Our results suggest that data-driven techniques can effectively self-optimize coverage and capacity in cellular networks.
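For intuition on the sample efficiency claimed above, the following minimal Bayesian-optimization loop tunes two assumed parameters (downtilt and transmit power) of a toy objective using a Gaussian-process surrogate and an expected-improvement acquisition. It collapses the multi-objective coverage/interference trade-off into a single score and does not reproduce the paper's simulation model or its DDPG baseline.

# Hedged sketch: single-objective Bayesian optimization over two cell parameters.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(1)
bounds = np.array([[0.0, 15.0],    # downtilt in degrees (assumed range)
                   [30.0, 46.0]])  # transmit power in dBm (assumed range)

def network_score(x):
    # Toy stand-in for "coverage minus interference"; a real study would query
    # the offline system-level simulator here.
    tilt, power = x
    return -((tilt - 8.0) ** 2 + 0.1 * (power - 40.0) ** 2) + rng.normal(0, 0.1)

def expected_improvement(mu, sigma, best):
    z = (mu - best) / np.maximum(sigma, 1e-9)
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

# Initial random design, then sequential GP-guided sampling.
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5, 2))
y = np.array([network_score(x) for x in X])

for _ in range(25):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    candidates = rng.uniform(bounds[:, 0], bounds[:, 1], size=(512, 2))
    mu, sigma = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(expected_improvement(mu, sigma, y.max()))]
    X = np.vstack([X, x_next])
    y = np.append(y, network_score(x_next))

print("best configuration (tilt, power):", X[np.argmax(y)], "score:", y.max())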