
Learning to Continuously Optimize Wireless Resource in a Dynamic Environment: A Bilevel Optimization Perspective

Published by Haoran Sun
Publication date: 2021
Paper language: English





There has been growing interest in developing data-driven, and in particular deep neural network (DNN) based, methods for modern communication tasks. For a few popular tasks such as power control, beamforming, and MIMO detection, these methods achieve state-of-the-art performance while requiring less computational effort, fewer resources for acquiring channel state information (CSI), etc. However, it is often challenging for these approaches to learn in a dynamic environment. This work develops a new approach that enables data-driven methods to continuously learn and optimize resource allocation strategies in a dynamic environment. Specifically, we consider an "episodically dynamic" setting where the environment statistics change in "episodes", and within each episode the environment is stationary. We propose to build the notion of continual learning (CL) into wireless system design, so that the learning model can incrementally adapt to new episodes without forgetting knowledge learned from previous episodes. Our design is based on a novel bilevel optimization formulation which ensures a certain "fairness" across different data samples. We demonstrate the effectiveness of the CL approach by integrating it with two popular DNN based models for power control and beamforming, respectively, and testing on both synthetic and ray-tracing based data sets. These numerical results show that the proposed CL approach not only adapts to new scenarios quickly and seamlessly but, importantly, also maintains high performance over previously encountered scenarios.
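
As a rough, illustrative sketch of the continual-learning mechanism described above: the replay structure and the worst-sample selection rule below are assumptions standing in for the paper's bilevel fairness formulation, not the authors' exact method.

```python
# Illustrative continual-learning loop with a small replay memory. The
# selection below (keep the samples the current model handles worst) is a
# simple proxy for the paper's bilevel "fairness" criterion -- an
# assumption, not the authors' exact formulation.
import torch

def select_memory(model, per_sample_loss, x, y, capacity=256):
    # Rank samples by current loss and keep the hardest ones.
    with torch.no_grad():
        losses = per_sample_loss(model(x), y)          # shape: (N,)
    keep = torch.topk(losses, min(capacity, x.shape[0])).indices
    return x[keep], y[keep]

def train_episode(model, optimizer, loss_fn, per_sample_loss,
                  episode_batches, memory=None):
    # One stationary episode: train on fresh batches mixed with replayed
    # memory so knowledge from earlier episodes is not forgotten.
    for x, y in episode_batches:
        if memory is not None:
            x = torch.cat([x, memory[0]])
            y = torch.cat([y, memory[1]])
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
        memory = select_memory(model, per_sample_loss, x, y)
    return memory
```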


Read also

Cong Shen, Jie Xu, Sihui Zheng (2021)
We advocate a new resource allocation framework, which we term resource rationing, for wireless federated learning (FL). Unlike existing resource allocation methods for FL, resource rationing focuses on balancing resources across learning rounds so that their collective impact on federated learning performance is explicitly captured. This new framework can be integrated seamlessly with existing resource allocation schemes to optimize the convergence of FL. In particular, a novel "later-is-better" principle is front and center in resource rationing, and it is validated empirically in several instances of wireless FL. We also point out technical challenges and research opportunities worth pursuing. Resource rationing highlights the benefits of treating emerging FL as a new class of service with its own characteristics, and of designing communication algorithms for this particular service.
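
A toy illustration of the later-is-better principle: split a fixed resource budget over learning rounds, weighting later rounds more heavily. The linear ramp below is an assumed schedule for illustration, not the schedule validated in the paper.

```python
# Toy "later-is-better" schedule over T federated-learning rounds.
# The linear ramp is an assumption for illustration only.
def later_is_better_schedule(total_budget, num_rounds):
    weights = [t + 1 for t in range(num_rounds)]        # round t gets weight t+1
    total = sum(weights)
    return [total_budget * w / total for w in weights]  # sums to total_budget

print(later_is_better_schedule(100.0, 5))  # later rounds receive larger shares
```
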
We study wireless power transmission from an energy source to multiple energy harvesting nodes, with the aim of maximizing energy efficiency. The source transmits energy to the nodes using one of the available power levels in each time slot, and the nodes transmit information back to the energy source using the harvested energy. The source does not have any channel state information; it only knows whether a received codeword from a given node was successfully decoded or not. With this limited information, the source has to learn the optimal power level that maximizes the energy efficiency of the network. We model the problem as a stochastic multi-armed bandit problem and develop an upper confidence bound (UCB) based algorithm, which learns the optimal transmit power of the energy source that maximizes the energy efficiency. Numerical results validate the performance guarantees of the proposed algorithm and show significant gains compared to the benchmark schemes.
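
A minimal UCB1 sketch of the bandit formulation described above, where each arm is a candidate transmit power level. The reward model (efficiency on a successful decode, zero otherwise) and the toy channel are hypothetical stand-ins for the paper's energy-efficiency feedback.

```python
# UCB1 over transmit power levels; the feedback model is an assumption.
import math
import random

def ucb_power_selection(power_levels, rounds, observe_reward):
    n = [0] * len(power_levels)        # pulls per power level
    mean = [0.0] * len(power_levels)   # running mean reward per level
    for t in range(1, rounds + 1):
        if t <= len(power_levels):
            arm = t - 1                # play each arm once to initialize
        else:
            arm = max(range(len(power_levels)),
                      key=lambda i: mean[i] + math.sqrt(2 * math.log(t) / n[i]))
        reward = observe_reward(power_levels[arm])
        n[arm] += 1
        mean[arm] += (reward - mean[arm]) / n[arm]
    return power_levels[max(range(len(power_levels)), key=lambda i: mean[i])]

def toy_reward(p):
    # Hypothetical channel: decoding succeeds more often at higher power,
    # but the efficiency reward scales as 1 / energy spent.
    return (1.0 / p) if random.random() < min(1.0, p / 3.0) else 0.0

best = ucb_power_selection([1.0, 2.0, 3.0, 4.0], rounds=5000,
                           observe_reward=toy_reward)
```
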
In this paper, the problem of minimizing energy and time consumption for task computation and transmission is studied in a mobile edge computing (MEC)-enabled balloon network. In the considered network, each user needs to process a computational task at each time instant, and high-altitude balloons (HABs), acting as flying wireless base stations, can use their powerful computational abilities to process the tasks offloaded from their associated users. Since the data size of each user's computational task varies over time, the HABs must dynamically adjust the user association, service sequence, and task partition scheme to meet the users' needs. This problem is posed as an optimization problem whose goal is to minimize the energy and time consumption for task computing and transmission by adjusting the user association, service sequence, and task allocation scheme. To solve this problem, a support vector machine (SVM) based federated learning (FL) algorithm is proposed to determine the user association proactively. The proposed SVM-based FL method enables each HAB to cooperatively build an SVM model that can determine all user associations without transmitting either user historical associations or computational tasks to other HABs. Given the prediction of the optimal user association, the service sequence and task allocation of each user can be optimized to minimize the weighted sum of the energy and time consumption. Simulations using real city cellular traffic data from the OMNILab at Shanghai Jiao Tong University show that the proposed algorithm can reduce the weighted sum of the energy and time consumption of all users by up to 16.1% compared to a conventional centralized method.
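
A hedged sketch of the federated training pattern described above: each HAB fits a linear SVM on its local history via hinge-loss subgradient steps, and only the weight vectors are averaged, so no user data leaves a HAB. The loss, the FedAvg-style averaging rule, and all names are assumptions standing in for the paper's SVM-based FL procedure.

```python
# FedAvg-style sketch of federated linear-SVM training (an assumption,
# not the paper's exact algorithm).
import numpy as np

def local_svm_step(w, X, y, lam=0.01, lr=0.1, epochs=5):
    # Subgradient descent on lam/2*||w||^2 + mean hinge loss; y in {-1, +1}.
    for _ in range(epochs):
        margins = y * (X @ w)
        viol = margins < 1                      # samples violating the margin
        grad = lam * w - (y[viol][:, None] * X[viol]).sum(axis=0) / len(y)
        w = w - lr * grad
    return w

def federated_round(w_global, hab_datasets):
    # Each HAB trains locally; the coordinator averages the weight vectors.
    local_ws = [local_svm_step(w_global.copy(), X, y) for X, y in hab_datasets]
    return np.mean(local_ws, axis=0)
```
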
In this paper, a joint task, spectrum, and transmit power allocation problem is investigated for a wireless network in which the base stations (BSs) are equipped with mobile edge computing (MEC) servers to jointly provide computational and communication services to users. Each user can request one of three types of computational tasks. Since the data size of each computational task is different, as the requested computational task varies the BSs must adjust their resource (subcarrier and transmit power) and task allocation schemes to effectively serve the users. This problem is formulated as an optimization problem whose goal is to minimize the maximal computational and transmission delay among all users. A multi-stack reinforcement learning (RL) algorithm is developed to solve this problem. Using the proposed algorithm, each BS can record the historical resource allocation schemes and users' information in its multiple stacks to avoid learning the same resource allocation scheme and users' states, thus improving the convergence speed and learning efficiency. Simulation results illustrate that the proposed algorithm can reduce the number of iterations needed for convergence and the maximal delay among all users by up to 18% and 11.1%, respectively, compared to the standard Q-learning algorithm.
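
One plausible reading of the multi-stack record, sketched below: alongside standard Q-learning, each BS keeps a per-state record of allocation actions it has already tried and steers exploration toward untried ones. The stack contents and update rule are assumptions, not the paper's exact design.

```python
# Q-learning plus a per-state record ("stack") of tried allocation actions;
# the recording mechanism here is an illustrative assumption.
import random
from collections import defaultdict

class MultiStackQLearner:
    def __init__(self, actions, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = defaultdict(float)         # Q-value per (state, action)
        self.stacks = defaultdict(list)     # per-state record of tried actions
        self.actions, self.alpha, self.gamma, self.eps = actions, alpha, gamma, eps

    def act(self, state):
        untried = [a for a in self.actions if a not in self.stacks[state]]
        if untried and random.random() < self.eps:
            action = random.choice(untried)     # explore an unrecorded scheme
        else:
            action = max(self.actions, key=lambda a: self.q[(state, a)])
        self.stacks[state].append(action)       # record the chosen scheme
        return action

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])
```
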
Yifu Yang, Gang Wu, Weidang Lu (2020)
A Load Balancing Relay Algorithm (LBRA) is proposed to address unfair spectrum resource allocation in traditional mobile MTC relaying. To achieve reasonable use of spectrum resources and a balanced distribution of MTC devices (MTCDs), spectrum resources are dynamically allocated by regrouping MTCDs on the MTCD-to-MTC-gateway link. Moreover, the system outage probability and transmission capacity are derived when using LBRA. The numerical results show that the proposed algorithm outperforms the traditional method in transmission capacity and outage probability, with a gain of about 0.7 dB in transmission capacity and an improvement of about 0.8 dB in outage probability at high MTCD density.
