
Resource Scheduling in Edge Computing: A Survey

Published by Shihong Hu
Publication date: 2021
Paper language: English





With the proliferation of the Internet of Things (IoT) and the wide penetration of wireless networks, the surging demand for data communications and computing calls for the emerging edge computing paradigm. By moving the services and functions located in the cloud to the proximity of users, edge computing can provide powerful computing, storage, networking, and communication capacity. Resource scheduling in edge computing, which is the key to the success of edge computing systems, has attracted increasing research interest. In this paper, we survey the state-of-the-art research findings to capture the research progress in this field. Specifically, we present the architecture of edge computing, under which different collaborative manners for resource scheduling are discussed. We then introduce a unified model before summarizing the current works on resource scheduling around three research issues: computation offloading, resource allocation, and resource provisioning. Based on two modes of operation, i.e., centralized and distributed modes, different techniques for resource scheduling are discussed and compared. We also summarize the main performance indicators based on the surveyed literature. To shed light on the significance of resource scheduling in real-world scenarios, we discuss several typical application scenarios involved in the research of resource scheduling in edge computing. Finally, we highlight open research challenges yet to be addressed and outline several open issues as future research directions.
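
As a concrete illustration of the computation offloading issue surveyed above, the sketch below compares running a task on the device against offloading it to an edge server under a commonly used latency/energy model. All function names and numbers (the switched-capacitance constant `kappa`, transmit power, CPU frequencies, task sizes) are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a binary computation-offloading decision (assumed model,
# not the survey's): a task of `cycles` CPU cycles with `data_bits` of input
# is either executed locally or offloaded to an edge server over a wireless link.

def local_cost(cycles, f_local_hz, kappa=1e-27, w_time=0.5, w_energy=0.5):
    """Weighted latency/energy cost of executing the task on the device."""
    latency = cycles / f_local_hz                 # seconds
    energy = kappa * f_local_hz**2 * cycles       # dynamic CPU energy (J)
    return w_time * latency + w_energy * energy

def edge_cost(cycles, data_bits, rate_bps, f_edge_hz, p_tx_watt=0.5,
              w_time=0.5, w_energy=0.5):
    """Weighted cost of uploading the task and executing it at the edge."""
    t_up = data_bits / rate_bps                   # upload time (s)
    t_exec = cycles / f_edge_hz                   # edge execution time (s)
    e_up = p_tx_watt * t_up                       # device energy spent on upload (J)
    return w_time * (t_up + t_exec) + w_energy * e_up

def offload_decision(cycles, data_bits, rate_bps, f_local_hz, f_edge_hz):
    """Return 'edge' if offloading has the lower weighted cost, else 'local'."""
    c_local = local_cost(cycles, f_local_hz)
    c_edge = edge_cost(cycles, data_bits, rate_bps, f_edge_hz)
    return ("edge", c_edge) if c_edge < c_local else ("local", c_local)

if __name__ == "__main__":
    # A 1 Gcycle task with 2 Mbit of input, 10 Mbps uplink,
    # 1 GHz device CPU vs. 10 GHz edge CPU (illustrative numbers only).
    print(offload_decision(1e9, 2e6, 10e6, 1e9, 10e9))
```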




Read also

Recently, unmanned aerial vehicle (UAV)-assisted multi-access edge computing (MEC) systems have emerged as a promising solution for providing computation services to mobile users outside of terrestrial infrastructure coverage. As each UAV operates independently, however, it is challenging to meet the computation demands of the mobile users due to the limited computing capacity at the UAVs' MEC servers as well as the UAVs' energy constraints. Therefore, collaboration among UAVs is needed. In this paper, a collaborative multi-UAV-assisted MEC system integrated with a MEC-enabled terrestrial base station (BS) is proposed. Then, the problem of minimizing the total latency experienced by the mobile users in the proposed system is studied by optimizing the offloading decision as well as the allocation of communication and computing resources while satisfying the energy constraints of both mobile users and UAVs. The proposed problem is shown to be a non-convex mixed-integer nonlinear program (MINLP) that is intractable. Therefore, the formulated problem is decomposed into three subproblems: i) the users' task offloading decision problem, ii) the communication resource allocation problem, and iii) the UAV-assisted MEC decision problem. Then, Lagrangian relaxation and alternating direction method of multipliers (ADMM) methods are applied to solve the decomposed problems in an alternating fashion. Simulation results show that the proposed approach reduces the average latency by up to 40.7% and 4.3% compared to the greedy and exhaustive search methods, respectively.
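
The decomposition and the Lagrangian/ADMM solvers are beyond what an abstract can convey, but the flavor of the offloading-decision subproblem can be sketched with a simple greedy baseline, comparable in spirit to the greedy method the abstract benchmarks against. The latency and energy model, server names, and numbers below are assumptions for illustration only.

```python
# Illustrative greedy baseline (details assumed): each user's task is assigned
# to the UAV or the terrestrial BS that currently yields the lowest completion
# latency, subject to a per-UAV energy budget. The BS has unbounded energy here,
# so a feasible assignment always exists in this toy setup.

def completion_latency(task_bits, task_cycles, rate_bps, f_hz):
    return task_bits / rate_bps + task_cycles / f_hz

def greedy_assignment(tasks, servers):
    """tasks: list of (bits, cycles); servers: dict name -> state dict."""
    plan = []
    for bits, cycles in tasks:
        best = None
        for name, s in servers.items():
            energy = s["joule_per_cycle"] * cycles
            if s["energy_left"] < energy:        # skip servers that are out of energy
                continue
            lat = completion_latency(bits, cycles, s["rate_bps"], s["f_hz"])
            if best is None or lat < best[1]:
                best = (name, lat, energy)
        name, lat, energy = best
        servers[name]["energy_left"] -= energy   # charge the chosen server's budget
        plan.append((name, round(lat, 3)))
    return plan

servers = {
    "uav1": {"rate_bps": 20e6, "f_hz": 3e9,  "joule_per_cycle": 1e-9, "energy_left": 5.0},
    "uav2": {"rate_bps": 20e6, "f_hz": 3e9,  "joule_per_cycle": 1e-9, "energy_left": 5.0},
    "bs":   {"rate_bps": 5e6,  "f_hz": 20e9, "joule_per_cycle": 1e-9, "energy_left": float("inf")},
}
tasks = [(1e6, 2e9), (2e6, 1e9), (4e6, 5e9)]     # (input bits, CPU cycles) per task
print(greedy_assignment(tasks, servers))
```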
In mobile edge computing (MEC), one of the important challenges is how many resources of which mobile edge server (MES) should be allocated to which user equipment (UE). Existing resource allocation schemes only consider CPU as the requested resource and assume the utility for MESs to be either a random variable or dependent on the requested CPU only. This paper presents a novel comprehensive utility function for resource allocation in MEC. The utility function considers the heterogeneous nature of the applications that a UE offloads to an MES. The proposed utility function considers all important parameters, including CPU, RAM, hard disk space, required time, and distance, to calculate a more realistic utility value for MESs. Moreover, we improve upon some general algorithms used for resource allocation in MEC and cloud computing by considering our proposed utility function. We name the improv…
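
Since the paper's exact formulation is not given in the abstract, the following is only a hypothetical sketch of a multi-parameter utility of the kind described: each requested quantity (CPU, RAM, disk, required time, distance) is normalized against an assumed capacity or tolerable maximum of the MES and combined with assumed weights.

```python
# Hypothetical multi-parameter utility for an MES (weights, fields, and
# normalisation are assumptions, not the paper's): higher utility means the
# request is cheaper for this server to serve.

def mes_utility(req, mes, weights=None):
    """req: requested amounts; mes: server capacities / tolerable maxima.

    Fields: cpu (cores), ram (GB), disk (GB), time (s), distance (m).
    """
    w = weights or {"cpu": 0.3, "ram": 0.2, "disk": 0.1, "time": 0.2, "distance": 0.2}
    # Normalise each requested quantity to [0, 1]: 0 = negligible, 1 = saturating.
    load = {k: min(req[k] / mes[k], 1.0) for k in w}
    return 1.0 - sum(w[k] * load[k] for k in w)

request = {"cpu": 2,  "ram": 1.0,  "disk": 5,   "time": 3.0,  "distance": 150}
server  = {"cpu": 16, "ram": 32.0, "disk": 500, "time": 10.0, "distance": 1000}
print(round(mes_utility(request, server), 3))
```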
This paper studies edge caching in fog computing networks, where a capacity-aware edge caching framework is proposed by considering both the limited fog cache capacity and the connectivity capacity of base stations (BSs). By allowing cooperation between fog nodes and the cloud data center, the average-download-time (ADT) minimization problem is formulated as a multi-class processor queuing process. We prove the convexity of the formulated problem and propose an alternating direction method of multipliers (ADMM)-based algorithm that can achieve the minimum ADT and converge much faster than existing algorithms. Simulation results demonstrate that the allocation of fog cache capacity and the connectivity capacity of BSs needs to be balanced according to the network status. While maximizing the edge-cache-hit-ratio (ECHR) by utilizing all available fog cache capacity is helpful when the BS connectivity capacity is sufficient, it is preferable to keep a lower ECHR and allocate more traffic to the cloud when the BS connectivity capacity is deficient.
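
The hit-ratio/connectivity tradeoff described above can be made concrete with a toy ADT calculation; the delay model below is a simplification assumed for illustration, not the paper's multi-class queueing formulation.

```python
# Toy average-download-time (ADT) model (assumed): a request is served from the
# fog cache with probability equal to the edge-cache-hit-ratio (ECHR); hits share
# the BS connectivity capacity, misses are fetched from the cloud at a fixed,
# larger delay.

def average_download_time(echr, request_rate, bs_capacity, t_edge, t_cloud):
    """ECHR in [0, 1]; hit traffic above the BS capacity inflates the edge delay."""
    hit_load = echr * request_rate
    congestion = max(hit_load / bs_capacity, 1.0)   # >1 means the BS is the bottleneck
    return echr * t_edge * congestion + (1 - echr) * t_cloud

# With ample BS connectivity capacity, maximising the hit ratio helps ...
print(average_download_time(echr=0.9, request_rate=100, bs_capacity=200, t_edge=0.02, t_cloud=0.2))
# ... with scarce BS capacity, a lower hit ratio (more cloud traffic) gives a lower ADT.
print(average_download_time(echr=0.9, request_rate=100, bs_capacity=10, t_edge=0.02, t_cloud=0.2))
print(average_download_time(echr=0.3, request_rate=100, bs_capacity=10, t_edge=0.02, t_cloud=0.2))
```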
In recent years, mobile devices have been equipped with increasingly advanced sensing and computing capabilities. Coupled with advancements in Deep Learning (DL), this opens up countless possibilities for meaningful applications. Traditional cloud-based Machine Learning (ML) approaches require the data to be centralized in a cloud server or data center. However, this results in critical issues related to unacceptable latency and communication inefficiency. To this end, Mobile Edge Computing (MEC) has been proposed to bring intelligence closer to the edge, where data is produced. However, conventional enabling technologies for ML at mobile edge networks still require personal data to be shared with external parties, e.g., edge servers. Recently, in light of increasingly stringent data privacy legislation and growing privacy concerns, the concept of Federated Learning (FL) has been introduced. In FL, end devices use their local data to train an ML model required by the server. The end devices then send the model updates rather than raw data to the server for aggregation. FL can serve as an enabling technology in mobile edge networks since it enables the collaborative training of an ML model and also enables DL for mobile edge network optimization. However, in a large-scale and complex mobile edge network, heterogeneous devices with varying constraints are involved. This raises challenges of communication costs, resource allocation, and privacy and security in the implementation of FL at scale. In this survey, we begin with an introduction to the background and fundamentals of FL. Then, we highlight the aforementioned challenges of FL implementation and review existing solutions. Furthermore, we present the applications of FL for mobile edge network optimization. Finally, we discuss the important challenges and future research directions in FL.
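
For readers unfamiliar with the aggregation step that FL relies on, here is a minimal FedAvg-style sketch under assumed settings (linear-regression devices, hand-picked learning rate and round counts); it is not the survey's code, only an illustration of "send model updates rather than raw data".

```python
# Minimal FedAvg-style sketch (assumed hyper-parameters): devices train locally,
# the server averages their model updates weighted by local dataset size;
# raw data never leaves the devices.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One device's local training: a few gradient steps of linear regression."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w, len(y)

def federated_round(global_w, devices):
    """Aggregate device updates weighted by the number of local samples."""
    updates = [local_update(global_w, X, y) for X, y in devices]
    total = sum(n for _, n in updates)
    return sum(n * w for w, n in updates) / total

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for n in (50, 120, 80):                        # heterogeneous local dataset sizes
    X = rng.normal(size=(n, 2))
    devices.append((X, X @ true_w + 0.05 * rng.normal(size=n)))

w = np.zeros(2)
for _ in range(20):                            # a few communication rounds
    w = federated_round(w, devices)
print(w)                                        # approaches [2, -1]
```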
Age of Information (AoI), defined as the time elapsed since the generation of the latest received update, is a promising performance metric to measure data freshness for real-time status monitoring. In many applications, status information needs to be extracted through computing, which can be processed at an edge server enabled by mobile edge computing (MEC). In this paper, we aim to minimize the average AoI within a given deadline by jointly scheduling the transmissions and computations of a series of update packets with deterministic transmission and computing times. The main analytical results are summarized as follows. Firstly, the minimum deadline that guarantees the successful transmission and computing of all packets is given. Secondly, a no-wait computing policy, which intuitively attains the minimum AoI, is introduced, and the feasibility condition of the policy is derived. Finally, a closed-form optimal scheduling policy is obtained on the condition that the deadline exceeds a certain threshold. The behavior of the optimal transmission and computing policy is illustrated by numerical results with different values of the deadline, which validate the analytical results.
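
To make the AoI metric concrete, the toy simulation below computes the time-average age for periodically generated updates that each need a fixed transmission time and a fixed computing time, served in order with no waiting between the two stages; the timing model and numbers are assumptions and far simpler than the paper's analysis.

```python
# Toy illustration of Age of Information (AoI): updates are generated
# periodically, each needs a fixed transmission time then a fixed computing
# time at the edge server, processed one at a time in order.

def average_aoi(gen_times, t_tx, t_comp, horizon, dt=0.001):
    # Completion time of each update: transmission then computing back to back,
    # never starting before the update is generated or the server is free.
    free_at, completions = 0.0, []
    for g in gen_times:
        start = max(g, free_at)
        done = start + t_tx + t_comp
        free_at = done
        completions.append((done, g))

    aoi_sum, steps = 0.0, int(horizon / dt)
    for k in range(steps):
        t = k * dt
        # The freshest update already delivered by time t defines the age.
        delivered = [g for done, g in completions if done <= t]
        age = t - max(delivered) if delivered else t
        aoi_sum += age * dt
    return aoi_sum / horizon

# Updates generated every 1 s; 0.2 s transmission + 0.3 s computing per update.
print(round(average_aoi(gen_times=[0, 1, 2, 3, 4], t_tx=0.2, t_comp=0.3, horizon=5.0), 3))
```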