
CL-ADMM: A Cooperative Learning Based Optimization Framework for Resource Management in MEC

Published by Xiaoxiong Zhong
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





We consider the problem of building an intelligent and efficient resource management framework for mobile edge computing (MEC) that reduces delay and energy consumption, featuring distributed optimization and an efficient congestion avoidance mechanism. In this paper, we present a Cooperative Learning framework for resource management in MEC from an Alternating Direction Method of Multipliers (ADMM) perspective, called the CL-ADMM framework. First, to cache tasks efficiently within a group, a novel task popularity estimation scheme based on a semi-Markov process model is proposed, on top of which a greedy cooperative task caching mechanism is established; together they effectively reduce delay and energy consumption. Second, to address group congestion, a dynamic task migration scheme based on cooperative improved Q-learning is proposed, which effectively reduces delay and alleviates congestion. Third, to minimize delay and energy consumption for resource allocation within a group, we formulate an optimization problem with a large number of variables and solve it with a novel ADMM-based scheme: introducing a new set of auxiliary variables reduces the problem's complexity by decomposing it into convex sub-problems, each solvable with a primal-dual approach whose convergence is guaranteed. We then prove convergence using Lyapunov theory. Numerical results demonstrate the effectiveness of CL-ADMM and show that it effectively reduces delay and energy consumption in MEC.
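As a concrete illustration of the ADMM step (a minimal sketch, not the paper's algorithm), the Python snippet below solves a toy version of the group resource-allocation problem: each task i gets a CPU share x_i whose cost trades a delay term w_i/x_i against an energy term e_i*x_i, and the shares are coupled by a total budget. The cost model, the penalty parameter rho, the bisection sub-solver, and all constants are assumptions made for this example.

import numpy as np

def admm_allocate(w, e, budget, rho=1.0, iters=200, x_min=1e-3):
    """Two-block ADMM sketch for: minimize sum_i (w_i/x_i + e_i*x_i)
    subject to sum_i x_i <= budget, x_i >= x_min.
    Splitting x = z, with z carrying the coupled budget constraint,
    makes every x_i-subproblem a one-dimensional convex problem."""
    n = len(w)
    x = np.full(n, budget / n)
    z = x.copy()
    u = np.zeros(n)                      # scaled dual variables
    for _ in range(iters):
        # x-update: argmin_x w/x + e*x + (rho/2)(x - z + u)^2 per task,
        # found by bisection on the (monotone) derivative.
        for i in range(n):
            lo, hi = x_min, budget
            for _ in range(60):
                mid = 0.5 * (lo + hi)
                grad = -w[i] / mid**2 + e[i] + rho * (mid - z[i] + u[i])
                lo, hi = (mid, hi) if grad < 0 else (lo, mid)
            x[i] = 0.5 * (lo + hi)
        # z-update: (approximate) projection of x + u onto the budget set
        v = x + u
        if v.sum() > budget:
            v -= (v.sum() - budget) / n
        z = np.maximum(v, x_min)
        u += x - z                       # dual update
    return z

np.random.seed(0)
w = np.random.uniform(1.0, 5.0, 4)       # per-task delay weights (assumed)
e = np.random.uniform(0.1, 0.5, 4)       # per-task energy prices (assumed)
alloc = admm_allocate(w, e, budget=8.0)
print("CPU shares:", np.round(alloc, 3), "| total:", round(alloc.sum(), 3))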




Read also

Haixia Peng, Xuemin Shen (2020)
In this paper, we investigate joint vehicle association and multi-dimensional resource management in a vehicular network assisted by multi-access edge computing (MEC) and unmanned aerial vehicles (UAVs). To efficiently manage the available spectrum, computing, and caching resources of the MEC-mounted base station and the UAVs, a resource optimization problem is formulated and solved at a central controller. Considering the long solving time of the formulated problem and the delay-sensitive requirements of vehicular applications, we transform the optimization problem using reinforcement learning and then design a deep deterministic policy gradient (DDPG)-based solution. By training the DDPG-based resource management model offline, optimal vehicle association and resource allocation decisions can be obtained rapidly. Simulation results demonstrate that the DDPG-based resource management scheme converges within 200 episodes and achieves higher delay/quality-of-service satisfaction ratios than a random scheme.
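As a hedged sketch of the DDPG machinery this approach relies on (the actual state, action, and reward design for vehicle association and resource allocation is the paper's and is not reproduced here), the PyTorch snippet below implements the core DDPG update: a critic fit to a bootstrapped target computed from target networks, an actor trained by ascending Q(s, actor(s)), and Polyak-averaged target updates. All dimensions, network sizes, and the random transitions are placeholders.

import random
from collections import deque

import torch
import torch.nn as nn

S_DIM, A_DIM = 8, 4                      # placeholder state/action sizes

def mlp(n_in, n_out, out_act=None):
    layers = [nn.Linear(n_in, 64), nn.ReLU(), nn.Linear(64, n_out)]
    if out_act is not None:
        layers.append(out_act)
    return nn.Sequential(*layers)

actor = mlp(S_DIM, A_DIM, nn.Tanh())     # deterministic policy mu(s)
critic = mlp(S_DIM + A_DIM, 1)           # action-value Q(s, a)
actor_t = mlp(S_DIM, A_DIM, nn.Tanh())   # target networks start as copies
critic_t = mlp(S_DIM + A_DIM, 1)
actor_t.load_state_dict(actor.state_dict())
critic_t.load_state_dict(critic.state_dict())
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
buffer = deque(maxlen=100_000)
GAMMA, TAU = 0.99, 0.005

def update(batch_size=64):
    s, a, r, s2 = (torch.stack(t) for t in zip(*random.sample(buffer, batch_size)))
    with torch.no_grad():                # bootstrapped TD target from target nets
        y = r + GAMMA * critic_t(torch.cat([s2, actor_t(s2)], dim=1))
    loss_c = nn.functional.mse_loss(critic(torch.cat([s, a], dim=1)), y)
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()
    loss_a = -critic(torch.cat([s, actor(s)], dim=1)).mean()  # ascend Q(s, mu(s))
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()
    for net, tgt in ((actor, actor_t), (critic, critic_t)):   # Polyak averaging
        for p, pt in zip(net.parameters(), tgt.parameters()):
            pt.data.mul_(1 - TAU).add_(TAU * p.data)

# random transitions (s, a, r, s') so the update step runs standalone
for _ in range(256):
    buffer.append((torch.randn(S_DIM), torch.tanh(torch.randn(A_DIM)),
                   torch.randn(1), torch.randn(S_DIM)))
update()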
Network slicing has emerged as a new business opportunity for operators, allowing them to sell customized slices to various tenants at different prices. In order to provide better-performing and cost-efficient services, network slicing involves challenging technical issues and urgently calls for intelligent innovations that keep resource management consistent with users' activities per slice. In that regard, deep reinforcement learning (DRL), which learns how to interact with an environment by trying alternative actions and reinforcing those that produce more rewarding consequences, is a promising solution. In this paper, after briefly reviewing the fundamental concepts of DRL, we investigate its application to some typical resource management scenarios for network slicing, namely radio resource slicing and priority-based core network slicing, and demonstrate the advantage of DRL over several competing schemes through extensive simulations. Finally, we discuss the challenges of applying DRL to network slicing from a general perspective.
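For the radio resource slicing case, a minimal tabular Q-learning sketch (our toy model, not the paper's DRL agent) already conveys the interaction loop: the agent picks a bandwidth split between two slices, observes how much of the per-slice demand was served, and reinforces splits that served more traffic. The demand model, reward, and discretization are invented for illustration.

import random
from collections import defaultdict

ACTIONS = [(0.2, 0.8), (0.5, 0.5), (0.8, 0.2)]   # candidate bandwidth splits
Q = defaultdict(float)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

def served(demand, split):
    """Reward: traffic actually served; unmet slice demand is simply lost."""
    return min(split[0], demand[0]) + min(split[1], demand[1])

def step(state):
    a = (random.randrange(len(ACTIONS)) if random.random() < EPS
         else max(range(len(ACTIONS)), key=lambda i: Q[(state, i)]))
    r = served(state, ACTIONS[a])
    # next per-slice demands arrive at random, discretized to one decimal
    nxt = (round(random.uniform(0, 1), 1), round(random.uniform(0, 1), 1))
    best_next = max(Q[(nxt, i)] for i in range(len(ACTIONS)))
    Q[(state, a)] += ALPHA * (r + GAMMA * best_next - Q[(state, a)])
    return nxt

random.seed(0)
state = (0.5, 0.5)
for _ in range(5000):
    state = step(state)
best = max(range(len(ACTIONS)), key=lambda i: Q[((0.2, 0.8), i)])
print("learned split for demand (0.2, 0.8):", ACTIONS[best])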
The Internet of Things (IoT) is an Internet-based environment of connected devices and applications. IoT creates an environment where physical devices and sensors are seamlessly combined into information nodes in order to deliver innovative, smart services that make people's lives easier and more efficient. The main objective of an IoT device network is to generate data, which the data analysis process converts into useful information; it also provides useful resources to end users. IoT resource management is a key challenge in ensuring the quality of the end-user experience. Many smart IoT devices and technologies, such as sensors, actuators, RFID, UMTS, 3G, and GSM, are used to build IoT networks. Cloud computing plays an important role in deploying these networks by providing physical resources as virtualized ones, including memory, computation power, network bandwidth, virtualized systems, and device drivers, on a secure, pay-per-use basis. One of the major concerns in a cloud-based IoT environment is resource management, which ensures efficient resource utilization and load balancing, reduces SLA violations, and improves system performance by lowering operational cost and energy consumption. Many researchers have proposed IoT-oriented resource management techniques. The focus of this paper is to survey these proposed resource allocation techniques and identify which parameters must be considered to improve resource allocation in IoT networks. The paper also uncovers the challenges and issues of cloud-based resource allocation for IoT environments.
In Federated Learning (FL), a global statistical model is developed by encouraging mobile users to train the model on their local data and aggregating the resulting local model parameters iteratively. However, due to the limited energy and computation capability of mobile devices, model training performance is often sacrificed to meet the objective of local energy minimization. In this regard, Multi-access Edge Computing (MEC)-enabled FL addresses the tradeoff between model performance and device energy consumption by allowing users to offload a portion of their local dataset to an edge server for model training. Since the edge server has high computation capability, the time consumed by model training at the edge server is insignificant; the time consumed by offloading datasets from mobile users to the edge server, however, has a significant impact on the total time consumption. Resource management in MEC-enabled FL is therefore challenging, with the objective of reducing total time consumption while saving the energy of mobile devices. In this paper, we formulate an energy-aware resource management problem for MEC-enabled FL in which the model training loss and the total time consumption are jointly minimized under the energy limitations of the mobile devices. In addition, we recast the formulated problem as a Generalized Nash Equilibrium Problem (GNEP) to capture the coupling constraints between radio resource management and dataset offloading. We then analyze the impact of dataset offloading and computing resource allocation on the model training loss, time, and energy consumption.
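To convey the flavor of such a formulation (the notation below is ours, not the paper's), one can write the joint minimization of training loss L and completion time under per-device energy budgets, where lambda_k is device k's offloaded data fraction, D_k its dataset size, b_k its bandwidth share, f_k its CPU frequency, gamma_k its SNR, and c_k its CPU cycles per sample; the shared-bandwidth constraint couples the devices' feasible sets, which is precisely what turns the problem into a GNEP rather than a standard Nash game:

\begin{aligned}
\min_{\{\lambda_k,\, b_k,\, f_k\}} \quad
  & \alpha\, L(\{\lambda_k\})
    + (1-\alpha)\, \max_k \left(
      \frac{\lambda_k D_k}{b_k \log_2(1+\gamma_k)}
      + \frac{(1-\lambda_k)\, D_k c_k}{f_k} \right) \\
\text{s.t.} \quad
  & E_k^{\mathrm{tx}}(\lambda_k, b_k) + E_k^{\mathrm{cmp}}(\lambda_k, f_k)
    \le E_k^{\max} \quad \forall k, \\
  & \sum\nolimits_k b_k \le B, \qquad 0 \le \lambda_k \le 1 \quad \forall k.
\end{aligned}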
We provide a novel solution for Resource Discovery (RD) in mobile device clouds consisting of selfish nodes. Mobile device clouds (MDCs) are cooperative arrangements of communication-capable devices formed with resource sharing in mind. Our work is motivated by the observation that, with the ever-growing applications of MDCs, it is essential to quickly locate the resources offered in such clouds, whether content, computing resources, or communication resources. Current approaches to RD fall into two models: the decentralized model, where RD is handled by each node individually, and the centralized model, where RD is assisted by centralized entities such as the cellular network. We propose LORD, a Leader-based framewOrk for RD in MDCs, which is self-organized and, unlike the centralized model, not prone to a single point of failure, while balancing energy consumption among MDC participants better than the decentralized model. Moreover, we provide a credit-based incentive to motivate the participation of selfish nodes in the leader selection process, and present the first energy-aware leader selection mechanism for credit-based models. Simulation results demonstrate that LORD balances energy consumption among nodes and prolongs overall network lifetime compared to the decentralized model.
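Below is a minimal sketch of what energy-aware, credit-based leader selection could look like (the scoring rule, weights, and costs are our assumptions, not LORD's actual mechanism): each round, the node with the best blend of residual energy and earned credits leads; leading drains energy but earns credits, so the role rotates instead of exhausting a single node.

import random

def select_leader(nodes, w_energy=0.7, w_credit=0.3):
    """Pick the node maximizing a blend of residual energy and credits."""
    return max(nodes, key=lambda n: w_energy * n["energy"] + w_credit * n["credits"])

def run_round(nodes, lead_cost=5.0, reward=2.0):
    leader = select_leader(nodes)
    leader["energy"] -= lead_cost    # leading drains the leader's battery
    leader["credits"] += reward      # credits compensate the selfish node
    return leader["id"]

random.seed(1)
nodes = [{"id": i, "energy": random.uniform(50.0, 100.0), "credits": 0.0}
         for i in range(5)]
print([run_round(nodes) for _ in range(10)])   # leadership rotates over rounds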