
EdgeBench: A Workflow-based Benchmark for Edge Computing

Posted by: Runyu Jin
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Edge computing has been developed to utilize multiple tiers of resources for privacy, cost, and Quality of Service (QoS) reasons. Edge workloads are data-driven and latency-sensitive, and as a result edge systems have evolved to be both heterogeneous and distributed. The unique characteristics of edge workloads and edge systems motivate EdgeBench, a workflow-based benchmark that aims to provide the ability to explore the full design space of edge workloads and edge systems. EdgeBench is both customizable and representative: it allows users to customize the workflow logic of edge workloads, the data storage backends, and the distribution of the individual workflow stages to different computing tiers. To illustrate the usability of EdgeBench, we also implement two representative edge workflows, a video analytics workflow and an IoT hub workflow, which represent two distinct but common edge workloads. Both workflows are evaluated using the workflow-level and function-level metrics reported by EdgeBench to illustrate the performance bottlenecks of both the edge systems and the edge workloads.
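To make the tier-and-backend customization concrete, here is a minimal, hypothetical sketch in Python of how such a workflow could be described. The stage names, tier labels, and storage backends are assumptions for illustration and are not EdgeBench's actual configuration format.

```python
# Hypothetical sketch only -- not EdgeBench's actual configuration format.
# It illustrates a workflow whose stages are individually assigned to
# computing tiers and storage backends.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str     # workflow stage (function) name
    tier: str     # where the stage runs: "device", "edge", or "cloud"
    storage: str  # data backend used by the stage, e.g. "local", "minio", "s3"

# A video analytics pipeline: capture on the device, detect objects at the
# edge, aggregate results in the cloud.
video_workflow = [
    Stage("capture_frames",  tier="device", storage="local"),
    Stage("detect_objects",  tier="edge",   storage="minio"),
    Stage("aggregate_stats", tier="cloud",  storage="s3"),
]

for stage in video_workflow:
    print(f"{stage.name}: runs on {stage.tier}, stores data in {stage.storage}")
```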



Read also

The fog/edge computing model allows harnessing resources in the proximity of Internet of Things (IoT) devices to support various types of real-time IoT applications. However, due to the mobility of users and the wide range of IoT applications with different requirements, satisfying these applications' requirements is challenging. Executing an IoT application exclusively on one fog/edge server is not always feasible due to limited resources, while executing it on different servers requires further collaboration among servers. Also, considering user mobility, some modules of each IoT application may require migration to other servers for execution, leading to service interruption and extra execution costs. In this article, we propose a new weighted cost model for hierarchical fog computing environments, in terms of the response time of IoT applications and the energy consumption of IoT devices, to minimize the cost of running IoT applications and potential migrations. Besides, a distributed clustering technique is proposed to enable the collaborative execution of tasks, emitted from application modules, among servers. We also propose an application placement technique to minimize the overall cost of executing IoT applications on multiple servers in a distributed manner. Furthermore, a distributed migration management technique is proposed for the potential migration of application modules to other remote servers as users move along their paths. Failure recovery methods are embedded in the clustering, application placement, and migration management techniques to recover from unpredicted failures. The performance results show that our technique significantly outperforms its counterparts in terms of placement deployment time, average execution cost of tasks, total number of migrations, total number of interrupted tasks, and cumulative migration cost.
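As a rough illustration of a weighted cost of this kind, the sketch below combines normalized response time, device energy consumption, and migration overhead into a single placement score. The weights and terms are assumptions for illustration, not the paper's exact model.

```python
# Illustrative weighted placement cost; weights and the migration term are
# assumptions, not the paper's exact formulation.
def weighted_cost(response_time, energy, migration_cost,
                  w_time=0.5, w_energy=0.4, w_mig=0.1):
    """Combine normalized response time, device energy use, and
    migration overhead into a single placement cost."""
    return w_time * response_time + w_energy * energy + w_mig * migration_cost

# Compare two candidate placements of an application module.
local_edge = weighted_cost(response_time=0.2, energy=0.3, migration_cost=0.0)
remote_fog = weighted_cost(response_time=0.4, energy=0.1, migration_cost=0.2)
best = "local edge server" if local_edge < remote_fog else "remote fog server"
print(f"cheaper placement: {best}")
```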
The Internet of Things (IoT) has already proven to be the building block for next-generation Cyber-Physical Systems (CPSs). The considerable amount of data generated by IoT devices needs latency-sensitive processing, which is not feasible by deploying the respective applications in remote Cloud datacentres. Edge/Fog computing, a promising extension of the Cloud at the IoT-proximate network, can meet such requirements for smart CPSs. However, the structural and operational differences of Edge/Fog infrastructure resist employing Cloud-based service regulations directly in these environments. As a result, many research works have recently been conducted focusing on efficient application and resource management in Edge/Fog computing environments. Scalable Edge/Fog infrastructure is a must to validate these policies, which is also challenging to accommodate in the real world due to high cost and implementation time. Considering simulation as a key to this constraint, various software packages have been developed that can imitate the physical behaviour of Edge/Fog computing environments. Nevertheless, existing simulators often fail to support advanced service management features because of their monolithic architecture, lack of actual datasets, and limited scope for periodic updates. To overcome these issues, in this work we have developed multiple simulation models for service migration, dynamic distributed cluster formation, and microservice orchestration for Edge/Fog computing, and integrated them with the existing iFogSim simulation toolkit to launch it as iFogSim2. The performance of iFogSim2 and its built-in policies is evaluated using three use case scenarios and compared with contemporary simulators and benchmark policies under different settings. Results indicate that the proposed solution outperforms the others in service management time, network usage, RAM consumption, and simulation time.
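The dynamic cluster formation that the toolkit models can be pictured with the generic latency-threshold sketch below. This is not iFogSim2 code; the node coordinates, latency proxy, and threshold are assumptions used only to illustrate the idea of grouping nearby fog nodes into collaborative clusters.

```python
# Generic illustration of latency-threshold cluster formation among fog nodes;
# not iFogSim2 code, just a sketch of the concept the toolkit simulates.
nodes = {"fog-1": (0, 0), "fog-2": (1, 1), "fog-3": (8, 8)}  # hypothetical positions

def latency(a, b):
    # Assume network latency grows with Euclidean distance (simplification).
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

THRESHOLD = 3.0
clusters = []
for name, pos in nodes.items():
    for cluster in clusters:
        # Join an existing cluster only if close enough to all of its members.
        if all(latency(pos, nodes[m]) <= THRESHOLD for m in cluster):
            cluster.append(name)
            break
    else:
        clusters.append([name])

print(clusters)  # fog-1 and fog-2 cluster together; fog-3 forms its own cluster
```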
Chen Yuan, Yi Lu, Wei Feng (2019)
Power flow analysis plays a fundamental and critical role in the energy management system (EMS) and is required to accommodate large and complex power systems well. To achieve high-performance and accurate power flow analysis, a graph-computing-based distributed power flow analysis approach is proposed in this paper. First, the power system network is divided into multiple areas. Slack buses are selected for each area and, at each SCADA sampling period, the inter-area transmission line power flows are equivalently allocated as extra load injections to the corresponding buses. The system network is thereby converted into multiple independent areas, so the power flow analysis can be conducted in parallel for each area while the solved system states are guaranteed without compromising accuracy. Besides, for each area, graph-computing-based fast decoupled power flow (FDPF) is employed to quickly analyze system states. The IEEE 118-bus system and the MP 10790-bus system are employed to verify the accuracy of the results and to demonstrate the promising computational performance of the proposed approach.
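The decomposition step can be pictured as follows: each sampled inter-area tie-line flow is re-attached to its boundary buses as an equivalent load injection, so the areas decouple and can be solved independently (e.g., with FDPF). The bus names, per-unit values, and sign convention below are illustrative assumptions, not data from the paper.

```python
# Sketch of converting inter-area tie-line flows into equivalent load
# injections at the boundary buses; values and sign convention are illustrative.
bus_load = {"A1": 1.0, "A2": 0.8, "B1": 1.2}   # per-unit loads per bus
tie_lines = [("A2", "B1", 0.35)]               # (from_bus, to_bus, active power flow)

for from_bus, to_bus, p_flow in tie_lines:
    bus_load[from_bus] += p_flow   # exporting area sees the flow as extra load
    bus_load[to_bus]   -= p_flow   # importing area sees it as an injection (negative load)

print(bus_load)  # areas are now decoupled and can be solved in parallel
```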
Intelligent placement and management of tasks in large-scale fog platforms is challenging due to the highly volatile nature of modern workload applications and sensitive user requirements of low energy consumption and response time. Container orchestration platforms have emerged to alleviate this problem, with prior art either using heuristics to quickly reach scheduling decisions or AI-driven methods like reinforcement learning and evolutionary approaches to adapt to dynamic scenarios. The former often fail to adapt quickly in highly dynamic environments, whereas the latter have run-times that are slow enough to negatively impact response time. Therefore, there is a need for scheduling policies that are both reactive, to work efficiently in volatile environments, and have low scheduling overheads. To achieve this, we propose a Gradient Based Optimization Strategy using Back-propagation of gradients with respect to Input (GOBI). Further, we leverage the accuracy of predictive digital-twin models and simulation capabilities by developing a Coupled Simulation and Container Orchestration Framework (COSCO). Using this, we create a hybrid simulation-driven decision approach, GOBI*, to optimize Quality of Service (QoS) parameters. Co-simulation and the back-propagation approaches allow these methods to adapt quickly in volatile environments. Experiments conducted using real-world data on fog applications with the GOBI and GOBI* methods show a significant improvement in terms of energy consumption, response time, Service Level Objective, and scheduling time by up to 15, 40, 4, and 82 percent, respectively, compared to state-of-the-art algorithms.
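The core idea behind GOBI-style optimization, descending the gradient of a QoS cost surrogate with respect to the scheduling decision itself, can be sketched as follows. The quadratic surrogate, learning rate, and decision encoding are purely illustrative stand-ins for a trained neural model, not the paper's implementation.

```python
# Minimal sketch of optimizing an input (scheduling decision) by gradient
# descent on a differentiable QoS cost surrogate. The quadratic surrogate
# stands in for a trained neural network and is purely illustrative.
import numpy as np

TARGET = np.array([0.3, 0.7])  # fictitious optimum of the surrogate

def qos_cost(x):
    # Stand-in surrogate: penalize deviation from the fictitious optimum.
    return np.sum((x - TARGET) ** 2)

def grad_qos_cost(x):
    # Analytic gradient of the surrogate with respect to the input x.
    return 2 * (x - TARGET)

x = np.array([0.9, 0.1])           # initial placement decision (e.g. host utilizations)
for _ in range(100):
    x -= 0.1 * grad_qos_cost(x)    # gradient step with respect to the *input*
    x = np.clip(x, 0.0, 1.0)       # keep the decision in a valid range

print(np.round(x, 3), round(qos_cost(x), 6))  # converges toward the surrogate's optimum
```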
Workflow decision making is critical to performing many practical workflow applications. Scheduling in edge-cloud environments can address the high complexity of workflow applications while decreasing the data transmission delay between the cloud and end devices. However, due to the heterogeneous resources in edge-cloud environments and the complicated data dependencies between the tasks in a workflow, significant challenges for workflow scheduling remain, including the selection of an optimal task-server assignment from the numerous possible combinations. Existing studies are mainly conducted under rigorous conditions without fluctuations, ignoring the fact that workflow scheduling typically takes place in uncertain environments. In this study, we focus on reducing the execution cost of workflow applications, mainly caused by task computation and data transmission, while satisfying the workflow deadline in uncertain edge-cloud environments. Triangular Fuzzy Numbers (TFNs) are adopted to represent the task processing time and the data transfer time. A cost-driven fuzzy scheduling strategy based on an Adaptive Discrete Particle Swarm Optimization (ADPSO) algorithm is proposed, which employs operators of the Genetic Algorithm (GA). This strategy introduces GA's random two-point crossover operator, neighborhood mutation operator, and adaptive multipoint mutation operator to effectively avoid converging on local optima. The experimental results show that our strategy can effectively reduce the workflow execution cost in uncertain edge-cloud environments compared with other benchmark solutions.
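The sketch below shows the kind of triangular fuzzy number (TFN) arithmetic such a strategy relies on for uncertain task times. The centroid defuzzification rule and the sample durations are assumptions chosen for illustration, not the paper's exact operators.

```python
# Triangular fuzzy number (TFN) sketch for uncertain task times; the centroid
# defuzzification rule is an illustrative assumption.
def tfn_add(x, y):
    """Add two TFNs given as (low, mode, high)."""
    return (x[0] + y[0], x[1] + y[1], x[2] + y[2])

def defuzzify(x):
    """Centroid of a triangular membership function."""
    return sum(x) / 3.0

compute_time  = (2.0, 3.0, 5.0)   # uncertain task processing time (seconds)
transfer_time = (0.5, 1.0, 2.0)   # uncertain data transfer time (seconds)
total = tfn_add(compute_time, transfer_time)
print(total, "-> crisp estimate:", round(defuzzify(total), 2), "s")
```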