
A Computation Offloading Model over Collaborative Cloud-Edge Networks with Optimal Transport Theory

Published by: Zhuo Li
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





As novel applications spring up in future network scenarios, the requirements on network service capabilities for differentiated or burst services are diverse. In the study of collaborative computing and resource allocation in edge scenarios, migrating computing tasks to the edge and the cloud requires a comprehensive consideration of energy consumption, bandwidth, and delay. Our paper proposes a collaboration mechanism based on computation offloading, which is flexible and customizable to meet the diversified requirements of differentiated networks. This mechanism handles the terminals' differentiated computing tasks by establishing a collaborative computation offloading model between the cloud server and the edge server. Experiments show that our method achieves significant improvements over conventional optimization algorithms, including reducing the execution time of computing tasks, improving the utilization of server resources, and decreasing the terminals' energy consumption.
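
The abstract above does not spell out its cost model, so the following is a minimal, hypothetical sketch in Python of how an offloading decision between local, edge, and cloud execution could be scored with a weighted sum of delay and terminal energy. The function names, CPU frequencies, link rates, power figures, and weights are all illustrative assumptions, not values taken from the paper.

    # Minimal sketch (not the authors' implementation): score each execution
    # target with a weighted sum of delay and terminal energy, then offload
    # the task to the cheapest target. All numbers below are assumptions.

    def execution_cost(cycles, data_bits, cpu_hz, uplink_bps, power_tx, power_cpu,
                       weight_delay=0.5, weight_energy=0.5, remote=False):
        """Weighted delay/energy cost of running one task on a given target."""
        tx_delay = data_bits / uplink_bps if remote else 0.0
        cpu_delay = cycles / cpu_hz
        # The terminal spends transmission energy only when offloading, and
        # CPU energy only when computing locally.
        energy = power_tx * tx_delay if remote else power_cpu * cpu_delay
        return weight_delay * (tx_delay + cpu_delay) + weight_energy * energy

    def offload_decision(task):
        """Pick the target (local / edge / cloud) with the smallest cost."""
        costs = {
            "local": execution_cost(task["cycles"], task["bits"], cpu_hz=1e9,
                                    uplink_bps=1.0, power_tx=0.0, power_cpu=0.9),
            "edge":  execution_cost(task["cycles"], task["bits"], cpu_hz=5e9,
                                    uplink_bps=20e6, power_tx=0.5, power_cpu=0.0,
                                    remote=True),
            "cloud": execution_cost(task["cycles"], task["bits"], cpu_hz=20e9,
                                    uplink_bps=5e6, power_tx=0.5, power_cpu=0.0,
                                    remote=True),
        }
        return min(costs, key=costs.get), costs

    if __name__ == "__main__":
        target, costs = offload_decision({"cycles": 2e9, "bits": 4e6})
        print(target, costs)   # with these assumed numbers, the edge server wins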




Read also

Zhuo Li, Xu Zhou, Taixin Li (2021)
With the mass deployment of computation-intensive and delay-sensitive applications on end devices, only adequate computing resources can meet the delay requirements of differentiated services. By offloading tasks to cloud servers or edge servers, computation offloading can alleviate computing and storage limitations and reduce delay and energy consumption. However, few existing offloading schemes take into consideration cloud-edge collaboration and the constraints of energy consumption and task dependency. This paper builds a collaborative computation offloading model spanning cloud and edge computing and formulates a multi-objective optimization problem. By fusing optimal transport with policy-based reinforcement learning (RL), we propose an Optimal-Transport-Based RL approach that resolves the offloading problem and makes the optimal offloading decision to minimize the overall cost of delay and energy consumption. Simulation results show that the proposed approach can effectively reduce this cost and significantly outperforms existing optimization solutions.
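
This abstract combines optimal transport with policy-based RL; the snippet below is only a hedged illustration of the optimal-transport ingredient, not the paper's algorithm. It uses a standard Sinkhorn iteration to compute a soft assignment of task load to servers from an assumed delay/energy cost matrix; the matrices and capacities are made-up toy values.

    # Entropy-regularized optimal transport (Sinkhorn) between a task-load
    # distribution and server capacities -- an illustration of the OT idea only.
    import numpy as np

    def sinkhorn_plan(cost, task_load, server_capacity, reg=0.05, iters=200):
        """Return a transport plan whose marginals match load and capacity."""
        K = np.exp(-cost / reg)                 # Gibbs kernel of the cost matrix
        u = np.ones_like(task_load)
        v = np.ones_like(server_capacity)
        for _ in range(iters):                  # alternate marginal scalings
            u = task_load / (K @ v)
            v = server_capacity / (K.T @ u)
        return np.diag(u) @ K @ np.diag(v)      # rows: task classes, cols: servers

    # Toy example: 3 task classes, 2 servers (edge, cloud); costs stand in for
    # weighted delay-plus-energy values, capacities are normalized shares.
    cost = np.array([[1.0, 2.5],
                     [2.0, 1.2],
                     [0.8, 1.5]])
    plan = sinkhorn_plan(cost,
                         task_load=np.array([0.3, 0.3, 0.4]),
                         server_capacity=np.array([0.5, 0.5]))
    print(plan.round(3))                        # fraction of each class per server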
Mobile devices supporting the Internet of Things (IoT) often have limited capabilities in computation, battery energy, and storage space, especially when running resource-intensive applications involving virtual reality (VR), augmented reality (AR), multimedia delivery, and artificial intelligence (AI), which may require broad bandwidth, low response latency, and large computational power. Edge cloud, or edge computing, is an emerging topic and technology that can tackle the deficiency of the currently centralized-only cloud computing model and move computation and storage resources closer to the devices in support of the above-mentioned applications. To make this happen, efficient coordination mechanisms and offloading algorithms are needed to allow mobile devices and the edge cloud to work together smoothly. In this survey paper, we investigate the key issues, methods, and various state-of-the-art efforts related to the offloading problem. We adopt a new characterizing model to study the whole process of offloading from mobile devices to the edge cloud. Through comprehensive discussions, we aim to draw an overall big picture of the existing efforts and research directions. Our study also indicates that offloading algorithms in the edge cloud have demonstrated profound potential for future technology and application development.
Recent advances in Low-Power Wide-Area Networks have mitigated interference by using cloud assistance. Those methods transmit the RSSI samples and corrupted packets to the cloud to restore the correct message. However, the effectiveness of those methods is challenged by the large amount of data to be transmitted. This paper presents a novel method for interference mitigation in an Edge-Cloud collaborative manner, namely ECCR. It no longer requires transmitting RSSI samples, whose length is eight times that of the packets. We demonstrate via real-world experiments that the bit errors of packets received at different base stations are disjoint. ECCR leverages this to collaborate with multiple base stations for error recovery. Each base station detects and reports bit error locations to the cloud; then both the error checking code and the interfered packets from other receivers are utilized to restore the correct packets. ECCR takes advantage of both the cloud's global management ability and each base station's ability to perceive the signal, and it is applicable to deployed LP-WAN systems (e.g. sx1280) without any extra hardware requirement. Experimental results show that ECCR is able to accurately decode packets even when nearly 51.76% of a packet is corrupted.
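
The key observation in this abstract is that bit errors at different base stations are (nearly) disjoint, so the cloud can rebuild a packet from several corrupted copies. The sketch below is a hypothetical illustration of that recovery idea, not the ECCR implementation; the data layout and names are assumptions.

    # For each bit position, trust any receiver that did not flag the position
    # as erroneous; fall back to a majority vote if every receiver flagged it.

    def recover_packet(copies):
        """copies: list of (bits, error_positions) reported by base stations."""
        length = len(copies[0][0])
        recovered = []
        for i in range(length):
            clean = [bits[i] for bits, errs in copies if i not in errs]
            pool = clean if clean else [bits[i] for bits, _ in copies]
            recovered.append(max(set(pool), key=pool.count))
        return recovered

    # Two base stations with disjoint corrupted positions recover the packet.
    original = [1, 0, 1, 1, 0, 0, 1, 0]
    bs1 = ([1, 0, 0, 1, 0, 0, 1, 0], {2})      # bit 2 corrupted at station 1
    bs2 = ([1, 0, 1, 1, 0, 1, 1, 0], {5})      # bit 5 corrupted at station 2
    assert recover_packet([bs1, bs2]) == original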
Coded distributed computing (CDC) has emerged as a promising approach because it enables computation tasks to be carried out in a distributed manner while mitigating straggler effects, which often account for the long overall completion times. Specifically, by using polynomial codes, computed results from only a subset of edge servers can be used to reconstruct the final result. However, incentive issues have not been studied systematically for the edge servers to complete the CDC tasks. In this paper, we propose a tractable two-level game-theoretic approach to incentivize the edge servers to complete the CDC tasks. Specifically, in the lower level, a hedonic coalition formation game is formulated where the edge servers share their resources within their coalitions. By forming coalitions, the edge servers have more Central Processing Unit (CPU) power to complete the computation tasks. In the upper level, given the CPU power of the coalitions of edge servers, an all-pay auction is designed to incentivize the edge servers to participate in the CDC tasks. In the all-pay auction, the bids of the edge servers are represented by the allocation of their CPU power to the CDC tasks. The all-pay auction is designed to maximize the utility of the cloud server by determining the allocation of rewards to the winners. Simulation results show that the edge servers are incentivized to allocate more CPU power when multiple rewards are offered, i.e., there are multiple winners, instead of rewarding only the edge server with the largest CPU power allocation. Besides, the utility of the cloud server is maximized when it offers multiple homogeneous rewards, instead of heterogeneous rewards.
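
To make the upper-level all-pay auction concrete, here is a hedged toy sketch: each coalition bids by committing CPU power, every bidder pays its bid whether or not it wins, and the highest bids take the offered rewards. The linear cost, reward values, and coalition names are assumptions for illustration, not the paper's utility model.

    # All-pay auction outcome: rank bids (committed CPU power), award the
    # rewards to the top bidders, and charge every bidder its own bid.

    def allpay_outcome(bids, rewards, unit_cost=1.0):
        """bids: {coalition: committed CPU power}; rewards: sorted descending."""
        ranked = sorted(bids, key=bids.get, reverse=True)
        utilities = {}
        for rank, name in enumerate(ranked):
            prize = rewards[rank] if rank < len(rewards) else 0.0
            utilities[name] = prize - unit_cost * bids[name]   # all-pay: bid is sunk
        return ranked[:len(rewards)], utilities

    # Two equal rewards, echoing the abstract's multiple-homogeneous-rewards case.
    winners, utils = allpay_outcome(
        bids={"coalitionA": 4.0, "coalitionB": 2.5, "coalitionC": 1.0},
        rewards=[4.0, 4.0])
    print(winners, utils)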
Mobile edge computing (MEC) has recently emerged as a promising technology to release the tension between computation-intensive applications and resource-limited mobile terminals (MTs). In this paper, we study delay-optimal computation offloading in computation-constrained MEC systems. We consider the computation task queue at the MEC server due to its constrained computation capability. In this case, the task queue at the MT and that at the MEC server are strongly coupled in a cascade manner, which creates complex interdependencies and brings new technical challenges. We model the computation offloading problem as an infinite-horizon average-cost Markov decision process (MDP) and approximate it by a virtual continuous time system (VCTS) with reflections. Different from most existing works, we develop dynamic instantaneous rate estimation for deriving closed-form approximate priority functions in different scenarios. Based on the approximate priority functions, we propose a closed-form multi-level water-filling computation offloading solution that characterizes the influence of not only the local queue state information (LQSI) but also the remote queue state information (RQSI). An extension is provided from single-MT, single-MEC-server scenarios to multiple-MT, multiple-MEC-server scenarios, and several insights are derived. Finally, simulation results show that the proposed scheme outperforms conventional schemes.
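
The abstract names a multi-level water-filling structure for its offloading solution. As a loosely related, hedged reference point, the snippet below implements only the textbook water-filling allocation (a budget split across channels by a common water level found with bisection); it is not the paper's queue-aware, LQSI/RQSI-dependent variant, and the gains and budget are invented.

    # Classic water-filling: each channel i receives max(0, level - 1/gain_i),
    # with the common water level chosen so the allocations use the full budget.
    import numpy as np

    def water_filling(gains, budget, tol=1e-9):
        gains = np.asarray(gains, dtype=float)
        lo, hi = 0.0, budget + (1.0 / gains).max()
        while hi - lo > tol:
            level = 0.5 * (lo + hi)
            used = np.maximum(0.0, level - 1.0 / gains).sum()
            lo, hi = (level, hi) if used < budget else (lo, level)
        return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / gains)

    alloc = water_filling(gains=[2.0, 1.0, 0.25], budget=3.0)
    print(alloc, alloc.sum())   # stronger channels receive more of the budget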