
AI Empowered Resource Management for Future Wireless Networks

Added by Yifei Shen
Publication date: 2021
Language: English





Resource management plays a pivotal role in wireless networks but, unfortunately, leads to challenging NP-hard problems. Artificial Intelligence (AI), especially deep learning techniques, has recently emerged as a disruptive technology for solving such challenging problems in real time. However, although promising results have been reported, practical design guidelines and performance guarantees for AI-based approaches are still missing. In this paper, we endeavor to address two fundamental questions: 1) What are the main advantages of AI-based methods compared with classical techniques? 2) Which neural network should we choose for a given resource management task? For the first question, four advantages are identified and discussed. For the second question, the \emph{optimality gap}, i.e., the gap to the optimal performance, is proposed as a measure for selecting model architectures, as well as for enabling a theoretical comparison between different AI-based approaches. Specifically, for the $K$-user interference management problem, we theoretically show that graph neural networks (GNNs) are superior to multi-layer perceptrons (MLPs), and that the performance gap between these two methods grows with $\sqrt{K}$.
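A minimal sketch of the structural difference the GNN-versus-MLP result rests on: a message-passing GNN shares its weights across users, so its output is permutation equivariant and independent of $K$, whereas an MLP over a flattened input is tied to a fixed user ordering and count. The NumPy layer below is illustrative only, not the paper's architecture; the feature sizes and the sum aggregation rule are assumptions.

```python
import numpy as np

def gnn_layer(direct, interference, W_self, W_agg):
    """One permutation-equivariant message-passing layer.

    direct:        (K, d) per-user features, e.g. direct-link gains.
    interference:  (K, K) interference gains; entry [j, k] couples user j to k.
    W_self, W_agg: (d, d) weights shared by every user -- this sharing is
                   what makes the layer independent of K.
    """
    np.fill_diagonal(interference, 0.0)           # no self-interference message
    messages = interference.T @ direct            # aggregate neighbors, (K, d)
    return np.maximum(direct @ W_self + messages @ W_agg, 0.0)  # ReLU

rng = np.random.default_rng(0)
K, d = 5, 8                                       # 5 users, 8 features each
direct = rng.standard_normal((K, d))
interf = np.abs(rng.standard_normal((K, K)))
W_self, W_agg = rng.standard_normal((d, d)), rng.standard_normal((d, d))

out = gnn_layer(direct, interf.copy(), W_self, W_agg)

# Permuting the users permutes the output the same way; an MLP over a
# flattened (K*d)-dimensional input offers no such guarantee.
perm = rng.permutation(K)
out_perm = gnn_layer(direct[perm], interf[np.ix_(perm, perm)], W_self, W_agg)
assert np.allclose(out[perm], out_perm)
```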



Related research

Wireless power transfer (WPT) is an emerging paradigm that will enable using wireless to its full potential in future networks, not only to convey information but also to deliver energy. Such networks will enable trillions of future low-power devices to sense, compute, connect, and energize anywhere, anytime, and on the move. The design of such future networks brings new challenges and opportunities for signal processing, machine learning, sensing, and computing so as to make the best use of RF radiation, spectrum, and network infrastructure in providing cost-effective and real-time power supplies to wireless devices and enabling wireless-powered applications. In this paper, we first review recent signal processing techniques to make WPT and wireless information and power transfer as efficient as possible. Topics include power amplifier and energy harvester nonlinearities, active and passive beamforming, intelligent reflecting surfaces, receive combining with multi-antenna harvesters, modulation, coding, waveform design, massive MIMO, channel acquisition, transmit diversity, multi-user power region characterization, coordinated multipoint, and distributed antenna systems. Then, we overview two different design methodologies: the model-and-optimize approach, relying on analytical system models, modern convex optimization, and communication theory; and the learning approach, based on data-driven end-to-end learning and physics-based learning. We discuss the pros and cons of each approach, especially when accounting for various nonlinearities in wireless-powered networks, and identify interesting emerging opportunities for the approaches to complement each other. Finally, we identify new emerging wireless technologies where WPT may play a key role -- wireless-powered mobile edge computing and wireless-powered sensing -- arguing that WPT, communication, computation, and sensing must be jointly designed.
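Among the listed topics, the energy-harvester nonlinearity is the one most easily made concrete. The sketch below implements a commonly used logistic (sigmoidal) rectifier saturation model from the wireless-powered-communications literature; the parameter values are illustrative assumptions, not figures from this paper.

```python
import numpy as np

def harvested_power(p_in, m_sat=0.024, a=150.0, b=0.014):
    """Logistic (sigmoidal) nonlinear energy-harvesting model.

    p_in:  input RF power in watts.
    m_sat: harvester saturation power (W) -- illustrative value.
    a, b:  circuit-dependent steepness / turn-on parameters (assumed).
    Returns harvested DC power in watts; saturates at m_sat.
    """
    psi = m_sat / (1.0 + np.exp(-a * (p_in - b)))   # raw logistic response
    omega = 1.0 / (1.0 + np.exp(a * b))             # removes zero-input offset
    return (psi - m_sat * omega) / (1.0 - omega)

# A linear model p_out = eta * p_in overestimates harvested power once the
# circuit saturates, which is why designs that account for the nonlinearity
# behave differently at high input power.
for p in [1e-3, 5e-3, 2e-2, 1e-1]:
    print(f"P_in = {p:.3f} W -> harvested = {harvested_power(p)*1e3:.2f} mW")
```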
We consider a fully-loaded ground wireless network supporting unmanned aerial vehicle (UAV) transmission services. To enable the overload transmissions to a ground user (GU) and a UAV, two transmission schemes are employed, namely non-orthogonal multiple access (NOMA) and relaying, depending on whether or not the GU and UAV are served simultaneously. Under the assumption of the system operating with infinite blocklength (IBL) codes, the IBL throughputs of both the GU and the UAV are derived under the two schemes. More importantly, we also consider the scenario in which data packets are transmitted via finite blocklength (FBL) codes, i.e., data transmission to both the UAV and the GU is performed under low-latency and high-reliability constraints. In this setting, the FBL throughputs are characterized again considering the two schemes of NOMA and relaying. Following the IBL and FBL throughput characterizations, optimal resource allocation designs are subsequently proposed to maximize the UAV throughput while guaranteeing the throughput of the cellular user. Moreover, we prove that the relaying scheme is able to provide transmission service to the UAV while improving the GU's performance, and that the relaying scheme potentially offers a higher throughput to the UAV in the FBL regime than in the IBL regime. On the other hand, the NOMA scheme provides a higher UAV throughput (than relaying) by slightly sacrificing the GU's performance.
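The IBL/FBL distinction rests on the finite-blocklength normal approximation: at blocklength $n$ and error probability $\epsilon$, the achievable rate falls short of Shannon capacity by a dispersion-dependent penalty. The sketch below is the generic AWGN form (Polyanskiy-Poor-Verdu), not the paper's NOMA/relaying throughput expressions.

```python
import math
from scipy.stats import norm

def fbl_rate(snr, n, eps):
    """Normal-approximation achievable rate in bits per channel use.

    snr: received SNR (linear); n: blocklength; eps: target error probability.
    R ~= C(snr) - sqrt(V(snr)/n) * Qinv(eps), with C the Shannon capacity
    and V the channel dispersion of the AWGN channel.
    """
    c = math.log2(1.0 + snr)                                    # IBL capacity
    v = (1.0 - 1.0 / (1.0 + snr) ** 2) * math.log2(math.e) ** 2  # dispersion
    return max(c - math.sqrt(v / n) * norm.isf(eps), 0.0)        # isf = Qinv

# Short packets with strict reliability pay a visible rate penalty relative
# to the infinite-blocklength capacity:
snr = 10.0                                        # linear SNR (i.e., 10 dB)
print("IBL capacity:", math.log2(1 + snr))        # ~3.459 bits/use
print("FBL, n=200, eps=1e-5:", fbl_rate(snr, 200, 1e-5))
```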
In this article, we first present the vision, key performance indicators, key enabling techniques (KETs), and services of 6G wireless networks. Then, we highlight a series of general resource management (RM) challenges as well as unique RM challenges corresponding to each KET. The unique RM challenges in 6G necessitate the transformation of existing optimization-based solutions into artificial intelligence/machine learning-empowered solutions. In the sequel, we formulate a joint network selection and subchannel allocation problem for a 6G multi-band network that provides both further-enhanced mobile broadband (FeMBB) and extreme ultra-reliable low-latency communication (eURLLC) services to terrestrial and aerial users. Our solution highlights the efficacy of the multi-band network and demonstrates the robustness of dueling deep Q-learning in obtaining an efficient RM solution with a faster convergence rate than the deep Q-network (DQN) and double DQN algorithms.
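For reference, the dueling architecture credited above decomposes the action value into a state-value stream and an advantage stream, $Q(s,a) = V(s) + A(s,a) - \frac{1}{|\mathcal{A}|}\sum_{a'} A(s,a')$. The PyTorch sketch below is a minimal generic version; the state and action dimensions for joint network selection and subchannel allocation are placeholder assumptions.

```python
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling head: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)."""
    def __init__(self, state_dim, n_actions, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)               # state-value stream
        self.advantage = nn.Linear(hidden, n_actions)   # advantage stream

    def forward(self, state):
        h = self.trunk(state)
        v, a = self.value(h), self.advantage(h)
        # Subtracting the mean advantage keeps V and A identifiable; this is
        # the standard dueling aggregation (Wang et al., 2016).
        return v + a - a.mean(dim=-1, keepdim=True)

# Example: the state could encode per-band channel qualities and each action
# a (network, subchannel) pair -- purely illustrative dimensions.
q_net = DuelingQNet(state_dim=32, n_actions=24)
q_values = q_net(torch.randn(4, 32))                    # batch of 4 states
greedy_actions = q_values.argmax(dim=-1)
```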
In this paper, the adoption of an intelligent reflecting surface (IRS) for multiple single-antenna source terminal (ST)-destination terminal (DT) pairs in two-hop networks is investigated. Different from previous studies on IRS, which merely focused on tuning the reflection coefficients of all the reflection elements at the IRS, in this paper we consider true reflection resource management. Specifically, true reflection resource management can be realized via trigger module selection based on our proposed IRS architecture, in which the reflection elements are partially controlled by multiple parallel switches of the controller. As the number of reflection elements increases, such reflection resource management becomes urgently needed owing to the non-negligible energy consumption. Moreover, the proposed modular IRS architecture is designed to make parts of the reflection elements independent and controllable. As such, our goal is to maximize the minimum signal-to-interference-plus-noise ratio (SINR) at the DTs via joint trigger module subset selection, transmit power allocation at the STs, and the corresponding passive beamforming of the trigger modules, subject to per-ST power budgets and a module size constraint. Since this problem is NP-hard due to the module size constraint, we transform the hard module size constraint into a group-sparse constraint by introducing a mixed row-block norm, which yields a suitable semidefinite relaxation. Additionally, a parallel alternating direction method of multipliers (PADMM) is proposed to identify the trigger module subset, after which the transmit power allocation and passive beamforming can be obtained by solving the original minimum-SINR maximization problem without the group-sparse constraint via partial linearization for generalized fractional programs.
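The group-sparse relaxation can be illustrated in isolation: a mixed l2/l1-type row-block norm penalizes whole modules at once, and its proximal operator (block soft thresholding) zeroes out weak modules entirely, which is what performs the trigger-module selection. The NumPy sketch below shows this mechanism under assumed module sizes; it does not reproduce the paper's exact norm definition or the PADMM updates.

```python
import numpy as np

def mixed_norm(theta, groups):
    """Mixed l2/l1 row-block norm: sum over modules of the l2 norm of the
    reflection coefficients belonging to that module."""
    return sum(np.linalg.norm(theta[g]) for g in groups)

def block_soft_threshold(theta, groups, lam):
    """Proximal operator of lam * mixed_norm: shrinks each module's block and
    sets weak modules exactly to zero -- this is what lets the relaxation
    select a small subset of trigger modules."""
    out = theta.copy()
    for g in groups:
        nrm = np.linalg.norm(theta[g])
        out[g] = 0.0 if nrm <= lam else (1.0 - lam / nrm) * theta[g]
    return out

# 12 reflection elements partitioned into 4 modules of 3 elements each.
rng = np.random.default_rng(1)
theta = rng.standard_normal(12) * np.repeat([0.1, 1.0, 0.05, 0.8], 3)
groups = [np.arange(i, i + 3) for i in range(0, 12, 3)]
selected = block_soft_threshold(theta, groups, lam=0.4)
print([bool(np.any(selected[g])) for g in groups])  # weak modules -> False
```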
Edge machine learning involves the deployment of learning algorithms at the network edge to leverage massive distributed data and computation resources to train artificial intelligence (AI) models. Among others, the framework of federated edge learning (FEEL) is popular for its data-privacy preservation. FEEL coordinates global model training at an edge server and local model training at edge devices that are connected by wireless links. This work contributes to the energy-efficient implementation of FEEL in wireless networks by designing joint computation-and-communication resource management ($\text{C}^2$RM). The design targets the state-of-the-art heterogeneous mobile architecture, where parallel computing using both a CPU and a GPU, called heterogeneous computing, can significantly improve both performance and energy efficiency. To minimize the sum energy consumption of devices, we propose a novel $\text{C}^2$RM framework featuring multi-dimensional control including bandwidth allocation, CPU-GPU workload partitioning and speed scaling at each device, and $\text{C}^2$ time division for each link. The key component of the framework is a set of equilibriums in energy rates with respect to different control variables that are proved to exist among devices or between processing units at each device. The results are applied to designing efficient algorithms for computing the optimal $\text{C}^2$RM policies faster than standard optimization tools. Based on the equilibriums, we further design energy-efficient schemes for device scheduling and greedy spectrum sharing that scavenges spectrum holes resulting from heterogeneous $\text{C}^2$ time divisions among devices. Using a real dataset, experiments are conducted to demonstrate the effectiveness of $\text{C}^2$RM in improving the energy efficiency of a FEEL system.
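The "equilibriums in energy rates" idea can be seen in a toy version of the CPU-GPU partitioning subproblem. Assuming the common cubic dynamic-power model (energy of roughly $\kappa f^2$ per cycle at clock speed $f$) and a hard per-round deadline, equating the two processors' marginal energy costs gives a closed-form split; the model and constants below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def optimal_split(total_cycles, deadline, kappa_cpu, kappa_gpu):
    """Energy-optimal CPU/GPU workload split under a deadline.

    Model: running w cycles in time T requires speed f = w / T and costs
    E = kappa * f^2 * w = kappa * w^3 / T^2 (dynamic power ~ kappa * f^3).
    Setting dE_cpu/dw = -dE_gpu/dw (equal marginal energy, an 'energy-rate
    equilibrium') gives sqrt(k_cpu) * w_cpu = sqrt(k_gpu) * w_gpu.
    """
    ratio = np.sqrt(kappa_gpu) / (np.sqrt(kappa_cpu) + np.sqrt(kappa_gpu))
    w_cpu = ratio * total_cycles
    w_gpu = total_cycles - w_cpu
    energy = (kappa_cpu * w_cpu**3 + kappa_gpu * w_gpu**3) / deadline**2
    return w_cpu, w_gpu, energy

W, T = 1e9, 0.5                  # 1 Gcycle of local training, 0.5 s slot
w_cpu, w_gpu, e_opt = optimal_split(W, T, kappa_cpu=1e-27, kappa_gpu=4e-28)
print(f"CPU share: {w_cpu/W:.2%}, GPU share: {w_gpu/W:.2%}, "
      f"energy: {e_opt:.3f} J")

# Sanity check: the balanced split beats CPU-only, GPU-only, and a 50/50 split.
for frac in (0.0, 0.5, 1.0):
    w = frac * W
    e = (1e-27 * w**3 + 4e-28 * (W - w)**3) / T**2
    assert e_opt <= e + 1e-12
```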