
An Optimal-Transport-Based Reinforcement Learning Approach for Computation Offloading

Published by: Zhuo Li
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





With the mass deployment of computing-intensive and delay-sensitive applications on end devices, only adequate computing resources can meet the delay requirements of differentiated services. By offloading tasks to cloud or edge servers, computation offloading can alleviate computing and storage limitations and reduce delay and energy consumption. However, few existing offloading schemes take into consideration cloud-edge collaboration or the constraints of energy consumption and task dependency. This paper builds a collaborative computation offloading model for cloud and edge computing and formulates a multi-objective optimization problem. We propose an Optimal-Transport-Based RL approach, constructed by fusing optimal transport with policy-based RL, to solve the offloading problem and make the optimal offloading decision that minimizes the overall cost of delay and energy consumption. Simulation results show that the proposed approach effectively reduces this cost and significantly outperforms existing optimization solutions.
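To make the kind of objective described above concrete, the following is a minimal Python sketch (not the paper's implementation): each task-server pair gets a weighted delay-plus-energy cost, and an entropy-regularised optimal-transport (Sinkhorn) step softly assigns tasks to servers under that cost. The energy model, parameter names, and all numbers are illustrative assumptions, and the sketch omits the policy-based RL component entirely.

import numpy as np

def offload_cost(task_cycles, task_bits, cpu_hz, rate_bps,
                 kappa=1e-27, w_delay=0.5, w_energy=0.5):
    # Weighted delay + energy of running one task on one server.
    # The dynamic-power energy model (kappa * f^2 per cycle) and all
    # default values are illustrative assumptions, not from the paper.
    delay = task_cycles / cpu_hz + task_bits / rate_bps
    energy = kappa * cpu_hz ** 2 * task_cycles
    return w_delay * delay + w_energy * energy

def sinkhorn_assignment(cost, task_load, server_cap, reg=0.05, iters=200):
    # Entropy-regularised optimal transport (Sinkhorn iterations) that
    # softly matches task load to server capacity under the given costs.
    cost = cost / cost.max()                 # scale to [0, 1] for stability
    K = np.exp(-cost / reg)                  # Gibbs kernel
    u = np.ones(len(task_load))
    v = np.ones(len(server_cap))
    for _ in range(iters):
        u = task_load / (K @ v)
        v = server_cap / (K.T @ u)
    return np.diag(u) @ K @ np.diag(v)       # soft task-to-server plan

# Toy example: 3 tasks offloaded to an edge server and a cloud server.
servers = [(2e9, 10e6), (10e9, 2e6)]          # (CPU Hz, uplink bps): edge, cloud
tasks = [(4e8, 1e6), (8e8, 5e5), (2e8, 2e6)]  # (CPU cycles, bits to transmit)
cost = np.array([[offload_cost(c, b, f, r) for f, r in servers] for c, b in tasks])
plan = sinkhorn_assignment(cost, task_load=np.full(3, 1 / 3),
                           server_cap=np.array([0.6, 0.4]))
print(plan.round(3))                          # offloading probabilities per task

In the full approach described in the abstract, a learned policy would replace the fixed cost weights and marginals; the transport plan here only illustrates how a soft task-to-server assignment can be derived from a delay/energy cost matrix.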




Read also

78 - Zhuo Li, Xu Zhou, Yang Liu 2021
As novel applications spring up in future network scenarios, the requirements on network service capabilities for differentiated or burst services are diverse. Aiming at collaborative computing and resource allocation in edge scenarios, migrating computing tasks to the edge and the cloud requires a comprehensive consideration of energy consumption, bandwidth, and delay. Our paper proposes a collaboration mechanism based on computation offloading, which is flexible and customizable to meet the diversified requirements of differentiated networks. This mechanism handles the terminals' differentiated computing tasks by establishing a collaborative computation offloading model between the cloud server and the edge server. Experiments show that our method yields significant improvements over conventional optimization algorithms, including reducing the execution time of computing tasks, improving the utilization of server resources, and decreasing the terminals' energy consumption.
Internet of Things (IoT) is considered as the enabling platform for a variety of promising applications, such as smart transportation and smart city, where massive devices are interconnected for data collection and processing. These IoT applications pose a high demand on storage and computing capacity, while the IoT devices are usually resource-constrained. As a potential solution, mobile edge computing (MEC) deploys cloud resources in the proximity of IoT devices so that their requests can be better served locally. In this work, we investigate computation offloading in a dynamic MEC system with multiple edge servers, where computational tasks with various requirements are dynamically generated by IoT devices and offloaded to MEC servers in a time-varying operating environment (e.g., channel condition changes over time). The objective of this work is to maximize the completed tasks before their respective deadlines and minimize energy consumption. To this end, we propose an end-to-end Deep Reinforcement Learning (DRL) approach to select the best edge server for offloading and allocate the optimal computational resource such that the expected long-term utility is maximized. The simulation results are provided to demonstrate that the proposed approach outperforms the existing methods.
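As a concrete illustration of the selection step in such an end-to-end DRL offloader, the sketch below defines a small Q-network whose discrete actions are (edge server, resource level) pairs, plus an epsilon-greedy action rule. The state layout, network sizes, and the PyTorch framing are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn as nn

class OffloadQNet(nn.Module):
    # Hypothetical Q-network: the state packs task size, deadline and
    # per-server channel gains; each action is a (server, resource-level) pair.
    def __init__(self, n_servers=3, n_levels=4, state_dim=8):
        super().__init__()
        self.n_actions = n_servers * n_levels     # joint server/resource choice
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, self.n_actions),
        )

    def forward(self, state):
        return self.net(state)                    # one Q-value per joint action

def select_action(qnet, state, eps=0.1):
    # Epsilon-greedy choice over (server, resource-level) pairs.
    if torch.rand(1).item() < eps:
        return torch.randint(qnet.n_actions, (1,)).item()
    with torch.no_grad():
        return qnet(state).argmax().item()

qnet = OffloadQNet()
state = torch.randn(8)                            # toy state vector
a = select_action(qnet, state)
server, level = divmod(a, 4)                      # decode the joint action
print(f"offload to server {server} at resource level {level}")

Training such a network against the long-term utility (completed deadlines minus energy) would follow a standard DQN-style loop; only the action decoding above is specific to the joint server/resource decision.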
95 - Dali Zhu, Haitao Liu, Ting Li 2021
In remote regions (e.g., mountains and deserts), cellular networks are usually sparsely deployed or unavailable. With the appearance of new applications (e.g., industrial automation and environment monitoring) in remote regions, resource-constrained terminals become unable to meet the latency requirements. Meanwhile, offloading tasks to an urban terrestrial cloud (TC) via a satellite link leads to high delay. To tackle the above issues, the Satellite Edge Computing architecture has been proposed, i.e., users can offload computing tasks to visible satellites for execution. However, existing works usually offload tasks only within pure satellite networks and make offloading decisions based on predefined user models; moreover, the runtime consumption of existing algorithms is rather high. In this paper, we study the task offloading problem in satellite-terrestrial edge computing networks, where tasks can be executed by a satellite or the urban TC. The proposed Deep Reinforcement Learning-based Task Offloading (DRTO) algorithm accelerates the learning process by adjusting the number of candidate offloading locations. In addition, the offloading location and bandwidth allocation depend only on the current channel states. Simulation results show that DRTO achieves near-optimal offloading cost with much lower runtime consumption, making it more suitable for satellite-terrestrial networks with fast-fading channels.
For current and future Internet of Things (IoT) networks, mobile edge-cloud computation offloading (MECCO) has been regarded as a promising means to support delay-sensitive IoT applications. However, offloading mobile tasks to the cloud is vulnerable to security issues due to malicious mobile devices (MDs). How to implement offloading that alleviates the computation burden at MDs while guaranteeing high security in the mobile edge cloud is a challenging problem. In this paper, we jointly investigate the security and computation offloading problems in a multi-user MECCO system with blockchain. First, to improve offloading security, we propose a trustworthy access control using blockchain, which protects cloud resources against illegal offloading behaviours. Then, to tackle the computation management of authorized MDs, we formulate a computation offloading problem by jointly optimizing the offloading decisions, the allocation of computing resources and radio bandwidth, and smart contract usage. This optimization problem aims to minimize the long-term system cost of latency, energy consumption and smart contract fees among all MDs. To solve the proposed offloading problem, we develop an advanced deep reinforcement learning algorithm using a double-dueling Q-network. Evaluation results from real experiments and numerical simulations demonstrate the significant advantages of our scheme over existing approaches.
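The double-dueling Q-network mentioned above combines two standard ideas, shown in the brief sketch below: a dueling head that splits the state value V(s) from per-action advantages A(s, a), and a double-DQN bootstrap target where the online network selects the action and the target network evaluates it. Dimensions, the reward, and the PyTorch framing are illustrative assumptions, not the authors' code.

import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    def __init__(self, state_dim=16, n_actions=8):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU())
        self.value = nn.Linear(64, 1)            # V(s)
        self.adv = nn.Linear(64, n_actions)      # A(s, a)

    def forward(self, s):
        h = self.body(s)
        v, a = self.value(h), self.adv(h)
        return v + a - a.mean(dim=-1, keepdim=True)   # identifiable Q(s, a)

def double_dqn_target(online, target, reward, next_state, gamma=0.99):
    # Double-DQN bootstrap target: online net selects, target net evaluates.
    with torch.no_grad():
        best = online(next_state).argmax(dim=-1, keepdim=True)
        return reward + gamma * target(next_state).gather(-1, best).squeeze(-1)

online, target = DuelingQNet(), DuelingQNet()
target.load_state_dict(online.state_dict())
y = double_dqn_target(online, target,
                      reward=torch.tensor([-1.0]),    # e.g., negative cost
                      next_state=torch.randn(1, 16))
print(y)   # bootstrap target for one transition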
In intelligent transportation systems (ITS), vehicles are expected to feature advanced applications and services that demand ultra-high data rates and low-latency communications. For that, millimeter wave (mmWave) communication has been emerging as a very promising solution. However, incorporating mmWave into ITS is particularly challenging due to the high mobility of vehicles and the inherent sensitivity of mmWave beams to dynamic blockages. This article addresses these problems by developing an optimal beam association framework for mmWave vehicular networks under high mobility. Specifically, we use a semi-Markov decision process to capture the dynamics and uncertainty of the environment. The Q-learning algorithm is typically used to find the optimal policy; however, Q-learning is notorious for its slow convergence. Instead of adopting deep reinforcement learning structures (like most works in the literature), we leverage the fact that there are usually multiple vehicles on the road to speed up the learning process. To that end, we develop a lightweight yet very effective parallel Q-learning algorithm that quickly obtains the optimal policy by learning simultaneously from multiple vehicles. Extensive simulations demonstrate that our proposed solution can increase the data rate by 47% and reduce the disconnection probability by 29% compared to other solutions.
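To show why learning from several vehicles at once helps, here is a toy parallel Q-learning sketch in which multiple simulated vehicles update one shared tabular Q at every step, so experience accumulates faster than with a single learner. The environment, reward, and state/action sizes are stand-ins, not the paper's semi-Markov beam-association model.

import numpy as np

rng = np.random.default_rng(0)
n_states, n_beams, n_vehicles = 20, 8, 5
Q = np.zeros((n_states, n_beams))                # shared Q-table
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(state, beam):
    # Hypothetical environment: reward peaks when the beam "points at" the state.
    reward = -abs(state % n_beams - beam)
    next_state = rng.integers(n_states)          # vehicle moves; channel changes
    return reward, next_state

states = rng.integers(n_states, size=n_vehicles)
for _ in range(5000):
    for v in range(n_vehicles):                  # every vehicle contributes an update
        s = states[v]
        a = rng.integers(n_beams) if rng.random() < eps else int(Q[s].argmax())
        r, s_next = step(s, a)
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        states[v] = s_next

print(Q.argmax(axis=1))                          # learned beam choice per state

With n_vehicles learners feeding the same table, each outer iteration performs several Q-updates, which is the intuition behind the speed-up over single-agent Q-learning claimed in the abstract.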