
Energy-aware Resource Management for Federated Learning in Multi-access Edge Computing Systems

Published by: Chit Wutyee Zaw
Publication date: 2021
Research field: Informatics Engineering
Paper language: English

In Federated Learning (FL), a global statistical model is developed by having mobile users train the model on their local data and iteratively aggregating the resulting local model parameters. However, because mobile devices have limited energy and computation capability, model training performance suffers whenever users prioritize local energy minimization. In this regard, Multi-access Edge Computing (MEC)-enabled FL addresses the tradeoff between model performance and the energy consumption of mobile devices by allowing users to offload a portion of their local dataset to an edge server for model training. Since the edge server has high computation capability, the training time at the edge server is insignificant; the time spent offloading datasets from mobile users to the edge server, however, has a significant impact on the total time consumption. Resource management in MEC-enabled FL is therefore challenging: the objective is to reduce the total time consumption while conserving the energy of the mobile devices. In this paper, we formulate an energy-aware resource management problem for MEC-enabled FL in which the model training loss and the total time consumption are jointly minimized, subject to the energy limitations of the mobile devices. In addition, we recast the formulated problem as a Generalized Nash Equilibrium Problem (GNEP) to capture the coupling constraints between radio resource management and dataset offloading. We then analyze the impact of dataset offloading and computing resource allocation on the model training loss, the total time, and the energy consumption.
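To make the offloading tradeoff concrete, here is a minimal numerical sketch (our own illustration, not the paper's formulation): per-user training time and energy as a function of the fraction of the local dataset offloaded to the edge, using the common CMOS computation-energy model E = κ·C·D·f² and treating edge compute time as negligible, as the abstract does. Every constant (`data_bits`, `cycles_per_bit`, `tx_rate`, `tx_power`, `kappa`) is a hypothetical placeholder.

```python
import numpy as np

def user_cost(offload_frac, data_bits, cycles_per_bit, cpu_freq,
              tx_rate, tx_power=0.5, kappa=1e-27):
    """Return (time, energy) for one user at a given offload split."""
    local_bits = (1.0 - offload_frac) * data_bits
    offload_bits = offload_frac * data_bits

    # Local training: T = C*D/f and E = kappa*C*D*f^2 (standard CMOS model).
    t_comp = cycles_per_bit * local_bits / cpu_freq
    e_comp = kappa * cycles_per_bit * local_bits * cpu_freq ** 2

    # Offloading: T = D/r and E = p*T; edge compute time is treated as
    # negligible, as in the abstract.
    t_tx = offload_bits / tx_rate
    e_tx = tx_power * t_tx

    return t_comp + t_tx, e_comp + e_tx

# Sweep the offload fraction to expose the time/energy tension the paper
# optimizes: offloading more saves local computation energy but adds
# transmission time and energy.
for s in np.linspace(0.0, 1.0, 5):
    t, e = user_cost(s, data_bits=8e6, cycles_per_bit=100,
                     cpu_freq=1e9, tx_rate=5e6)
    print(f"offload={s:.2f}  time={t:.3f} s  energy={e:.4f} J")
```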




Read also

Edge machine learning involves the deployment of learning algorithms at the network edge to leverage massive distributed data and computation resources to train artificial intelligence (AI) models. Among others, the framework of federated edge learning (FEEL) is popular for its data-privacy preservation. FEEL coordinates global model training at an edge server and local model training at edge devices that are connected by wireless links. This work contributes to the energy-efficient implementation of FEEL in wireless networks by designing joint computation-and-communication resource management ($\text{C}^2$RM). The design targets the state-of-the-art heterogeneous mobile architecture where parallel computing using both a CPU and a GPU, called heterogeneous computing, can significantly improve both performance and energy efficiency. To minimize the sum energy consumption of devices, we propose a novel $\text{C}^2$RM framework featuring multi-dimensional control including bandwidth allocation, CPU-GPU workload partitioning and speed scaling at each device, and $\text{C}^2$ time division for each link. The key component of the framework is a set of equilibria in energy rates with respect to different control variables, which are proved to exist among devices or between processing units at each device. The results are applied to designing efficient algorithms for computing the optimal $\text{C}^2$RM policies faster than standard optimization tools. Based on the equilibria, we further design energy-efficient schemes for device scheduling and greedy spectrum sharing that scavenges spectrum holes resulting from heterogeneous $\text{C}^2$ time divisions among devices. Using a real dataset, experiments are conducted to demonstrate the effectiveness of $\text{C}^2$RM in improving the energy efficiency of a FEEL system.
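As a rough illustration of the workload-partitioning idea (a sketch under our own assumptions, not the paper's $\text{C}^2$RM algorithm), the following splits a fixed per-round workload between a CPU and a GPU so that both finish exactly at a deadline, then grid-searches the split that minimizes total dynamic energy. The effective-capacitance constants `kappa_cpu` and `kappa_gpu`, the cycle count, and the deadline are all hypothetical.

```python
import numpy as np

def split_energy(w_cpu_frac, total_cycles, deadline,
                 kappa_cpu=1e-27, kappa_gpu=4e-28):
    """Energy when each unit runs just fast enough to meet the deadline."""
    c_cpu = w_cpu_frac * total_cycles
    c_gpu = (1.0 - w_cpu_frac) * total_cycles
    f_cpu = c_cpu / deadline          # required CPU frequency (speed scaling)
    f_gpu = c_gpu / deadline          # required GPU frequency
    # Dynamic energy: E = kappa * cycles * f^2 for each processing unit.
    return kappa_cpu * c_cpu * f_cpu ** 2 + kappa_gpu * c_gpu * f_gpu ** 2

fracs = np.linspace(0.0, 1.0, 1001)
energies = [split_energy(x, total_cycles=2e9, deadline=0.5) for x in fracs]
best = fracs[int(np.argmin(energies))]
print(f"best CPU share: {best:.3f}, energy: {min(energies):.4f} J")
# At the interior optimum, the marginal energy per cycle of the two units
# equalizes, mirroring the equilibrium structure the abstract describes.
```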
Edge computing is an emerging solution to support future Internet of Things (IoT) applications that are delay-sensitive, processing-intensive, or that require closer intelligence. Machine intelligence and data-driven approaches are envisioned to build future Edge-IoT systems that satisfy IoT devices' demands for edge resources. However, significant challenges and technical barriers exist that complicate resource management for such Edge-IoT systems. IoT devices running various applications can demonstrate a wide range of behaviors in their resource demand that are extremely difficult to manage. In addition, managing multidimensional resources fairly and efficiently at the edge in such a setting is a challenging task. In this paper, we develop a novel data-driven resource management framework named BEHAVE that intelligently and fairly allocates edge resources to heterogeneous IoT devices with consideration of their behavior of resource demand (BRD). BEHAVE aims to holistically address these technical barriers by: 1) building an efficient scheme for modeling and assessing the BRD of IoT devices based on their resource requests and resource usage; 2) introducing a new Rational, Fair, and Truthful Resource Allocation (RFTA) model that binds the devices' BRD and resource allocation to achieve fair allocation and encourage truthfulness in resource demand; and 3) developing an enhanced deep reinforcement learning (EDRL) scheme to achieve the RFTA goals. The evaluation results demonstrate BEHAVE's capability to analyze the IoT devices' BRD and adjust its resource management policy accordingly.
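The RFTA idea of tying allocation to a behavior score can be sketched with a toy capped proportional-share rule (our simplification; BEHAVE itself learns its policy with deep RL): devices with better BRD scores receive a larger share, but no device receives more than it requested, which blunts the incentive to over-claim. The device names, scores, and capacity below are made up.

```python
def capped_proportional(requests, scores, capacity):
    """Grant resources proportionally to behavior scores, capped at requests."""
    alloc = {d: 0.0 for d in requests}
    active = set(requests)
    while active and capacity > 1e-9:
        total = sum(scores[d] for d in active)
        snapshot = capacity
        saturated = set()
        for d in active:
            grant = min(snapshot * scores[d] / total, requests[d] - alloc[d])
            alloc[d] += grant
            capacity -= grant
            if requests[d] - alloc[d] <= 1e-9:
                saturated.add(d)
        active -= saturated
        if not saturated:   # nobody hit their cap: budget fully distributed
            break
    return alloc

requests = {"cam": 4.0, "sensor": 1.0, "drone": 6.0}   # e.g. CPU cores asked
scores = {"cam": 0.9, "sensor": 0.8, "drone": 0.3}     # higher = better BRD
print(capped_proportional(requests, scores, capacity=6.0))
```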
Ultra-dense deployments in 5G, the next generation of cellular networks, are an alternative way to provide ultra-high throughput by bringing users closer to the base stations. On the other hand, 5G deployments must not incur a large increase in energy consumption, both to keep them cost-effective and, most importantly, to reduce the carbon footprint of cellular networks. We propose a reinforcement learning cell-switching algorithm to minimize energy consumption in ultra-dense deployments without compromising the quality of service (QoS) experienced by the users. In this regard, the proposed algorithm can intelligently learn which small cells (SCs) to turn off at any given time based on the traffic load of the SCs and the macro cell. To validate the idea, we used the open call detail record (CDR) data set from the city of Milan, Italy, and tested our algorithm against typical operational benchmark solutions. With the obtained results, we demonstrate exactly when and how the proposed algorithm can provide energy savings, and moreover how this happens without reducing the QoS of users. Most importantly, we show that our solution performs very similarly to an exhaustive search, with the advantage of being scalable and less complex.
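A tabular Q-learning toy in the spirit of the cell-switching idea (an assumption on our part; the paper's state, action, and reward design, and its Milan CDR traffic model, are richer): the agent learns how many small cells to keep on at each discretized load level, trading an energy cost per active cell against a QoS penalty when capacity falls short. The i.i.d. load process and all constants are stand-ins.

```python
import random

N_LOAD_LEVELS, N_SC = 4, 3           # hypothetical problem size
Q = [[0.0] * (N_SC + 1) for _ in range(N_LOAD_LEVELS)]

def reward(load_level, cells_on):
    energy_cost = cells_on                             # each active SC costs energy
    # QoS penalty when capacity (macro + SCs) falls short of the load.
    qos_penalty = 10.0 * max(0, load_level - (1 + cells_on))
    return -(energy_cost + qos_penalty)

alpha, gamma, eps = 0.1, 0.9, 0.1
state = 0
for step in range(20000):
    action = (random.randrange(N_SC + 1) if random.random() < eps
              else max(range(N_SC + 1), key=lambda a: Q[state][a]))
    r = reward(state, action)
    next_state = random.randrange(N_LOAD_LEVELS)       # stand-in traffic process
    Q[state][action] += alpha * (r + gamma * max(Q[next_state]) - Q[state][action])
    state = next_state

for s in range(N_LOAD_LEVELS):
    print(f"load level {s}: keep {max(range(N_SC + 1), key=lambda a: Q[s][a])} SCs on")
```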
Mushu Li, Nan Cheng, Jie Gao (2020)
In this paper, we study unmanned aerial vehicle (UAV) assisted mobile edge computing (MEC) with the objective of optimizing computation offloading with minimum UAV energy consumption. In the considered scenario, a UAV plays the role of an aerial cloudlet that collects and processes the computation tasks offloaded by ground users. Given the service requirements of users, we aim to maximize UAV energy efficiency by jointly optimizing the UAV trajectory, the user transmit power, and the computation load allocation. The resulting optimization problem corresponds to nonconvex fractional programming, and the Dinkelbach algorithm and the successive convex approximation (SCA) technique are adopted to solve it. Furthermore, we decompose the problem into multiple subproblems for distributed and parallel solving. To cope with the case when knowledge of user mobility is limited, we adopt a spatial distribution estimation technique to predict the locations of ground users so that the proposed approach can still be applied. Simulation results demonstrate the effectiveness of the proposed approach in maximizing the energy efficiency of the UAV.
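The Dinkelbach step mentioned in the abstract is easy to show generically (a sketch of the standard algorithm on a toy energy-efficiency ratio, not the paper's joint trajectory/power/load problem): maximize N(x)/D(x) by repeatedly solving max_x N(x) − λ·D(x) and updating λ to the ratio at the maximizer until the inner optimum reaches zero. The rate and power models below are hypothetical, and a grid search stands in for the convexified inner problem.

```python
import numpy as np

def dinkelbach(N, D, xs, tol=1e-9, max_iter=100):
    """Maximize N(x)/D(x) over the grid xs via Dinkelbach's algorithm."""
    lam = 0.0
    for _ in range(max_iter):
        vals = N(xs) - lam * D(xs)
        i = int(np.argmax(vals))            # inner problem (grid search here)
        if vals[i] < tol:                   # F(lam) ~ 0 -> lam is the optimal ratio
            return xs[i], lam
        lam = N(xs[i]) / D(xs[i])           # Dinkelbach update
    return xs[i], lam

# Toy energy-efficiency example: bits delivered per joule as a function of
# transmit power p (all models and constants are placeholders).
xs = np.linspace(0.01, 5.0, 5000)
bits = lambda p: np.log2(1.0 + 2.0 * p)     # rate as a function of power
energy = lambda p: 0.1 + p                  # circuit + transmit power
p_star, ee = dinkelbach(bits, energy, xs)
print(f"optimal power {p_star:.3f} W, energy efficiency {ee:.3f} bits/J")
```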
5G is regarded as a revolutionary mobile network, which is expected to support a vast number of novel services, ranging from remote health care to smart cities. However, the heterogeneous Quality of Service (QoS) requirements of different services and the limited spectrum make radio resource allocation a challenging problem in 5G. In this paper, we propose a multi-agent reinforcement learning (MARL) method for radio resource slicing in 5G. We model each slice as an intelligent agent that competes for limited radio resources, and correlated Q-learning is applied for inter-slice resource block (RB) allocation. The proposed correlated Q-learning based inter-slice RB allocation (COQRA) scheme is compared with Nash Q-learning (NQL), Latency-Reliability-Throughput Q-learning (LRTQ), and the priority proportional fairness (PPF) algorithm. Our simulation results show that the proposed COQRA achieves 32.4% lower latency and 6.3% higher throughput compared with LRTQ, and 5.8% lower latency and 5.9% higher throughput than NQL. Significantly higher throughput and a lower packet drop rate (PDR) are observed in comparison to PPF.
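A heavily simplified, bandit-style stand-in for the inter-slice RB allocation (our assumption: a coordinator greedily picks the joint split maximizing the summed slice values, whereas correlated Q-learning solves a correlated-equilibrium program at each step): two slices with different hypothetical traffic intensities learn how to divide four RBs.

```python
import random

N_RB = 4
joint_actions = [(a, N_RB - a) for a in range(N_RB + 1)]
Q = [{ja: 0.0 for ja in joint_actions} for _ in range(2)]  # one table per slice
mean_demand = (1.0, 2.5)    # slice 1 carries heavier traffic (hypothetical)

def reward(rbs, demand):
    served = min(rbs, demand)
    return served - 0.5 * max(0.0, demand - rbs)   # throughput minus backlog

alpha, eps = 0.05, 0.2
for step in range(20000):
    if random.random() < eps:
        ja = random.choice(joint_actions)
    else:   # coordinator picks the joint action with the best summed value
        ja = max(joint_actions, key=lambda a: Q[0][a] + Q[1][a])
    for s in (0, 1):
        demand = random.expovariate(1.0 / mean_demand[s])
        r = reward(ja[s], demand)
        Q[s][ja] += alpha * (r - Q[s][ja])          # running-average update

best = max(joint_actions, key=lambda a: Q[0][a] + Q[1][a])
print(f"learned split: slice0={best[0]} RBs, slice1={best[1]} RBs")
```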
