
An Experimental Study on Microservices based Edge Computing Platforms

Added by Yu Chen
Publication date: 2020
Research language: English





Rapid technological advances in the Internet of Things (IoT) are making the blueprint of Smart Cities feasible by integrating heterogeneous cloud/fog/edge computing paradigms that collaboratively provide a variety of smart services in our cities and communities. Thanks to attractive features like fine granularity and loose coupling, the microservices architecture has been proposed to provide scalable and extensible services in large-scale distributed IoT systems. Recent studies have evaluated and analyzed the performance interference between microservices in cloud computing environments. However, these results do not carry over directly to IoT applications, given edge-device constraints such as limited computational capacity and network bandwidth. This paper investigates multiple microservice deployment policies on an edge computing platform. The microservices are packaged as Docker containers, and comprehensive experimental results demonstrate the performance of, and interference between, microservices running benchmark scenarios.
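The abstract does not include code, but the deployment-policy idea can be illustrated with a short sketch. The minimal example below, using the Docker SDK for Python, starts two hypothetical microservice containers on an edge host with explicit CPU pinning and memory caps; the image names and resource limits are assumptions, not the paper's configuration.

```python
# Minimal sketch (not the paper's code): deploy two microservices as Docker
# containers on an edge device with explicit resource caps, via the Docker
# SDK for Python. Image names and limits are hypothetical.
import docker

client = docker.from_env()

# A simple co-location policy: pin each microservice to its own CPU core and
# cap its memory so contention on the constrained edge host stays bounded.
services = [
    {"image": "example/video-analytics:latest", "cpus": "0", "mem": "256m"},
    {"image": "example/sensor-gateway:latest",  "cpus": "1", "mem": "128m"},
]

for svc in services:
    client.containers.run(
        svc["image"],
        detach=True,              # run in the background
        cpuset_cpus=svc["cpus"],  # pin to a specific core
        mem_limit=svc["mem"],     # hard memory cap
    )
```

Capping each container bounds the interference a noisy neighbor can cause on a resource-constrained edge device, which is the kind of effect the paper's benchmarks measure.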



Related Research

Mobile edge computing (MEC) has become a promising solution for utilizing distributed computing resources to support computation-intensive vehicular applications in dynamic driving environments. To facilitate this paradigm, onsite resource trading serves as a critical enabler. However, dynamic communication and resource conditions can introduce unpredictable trading latency, trading failures, and unfair pricing into the conventional resource trading process. To overcome these challenges, we introduce a novel futures-based resource trading approach for edge computing-enabled Internet of Vehicles (IoV), where a forward contract facilitates resource trading negotiations between an MEC server (seller) and a vehicle (buyer) over a given future term. By estimating historical statistics of future resource supply and network conditions, we formulate futures-based resource trading as an optimization problem that maximizes the seller's and the buyer's expected utilities, while applying risk evaluations to mitigate possible losses incurred by uncertainties in the system. To tackle this problem, we propose an efficient bilateral negotiation approach that helps the participants reach a consensus. Extensive simulations demonstrate that the proposed futures-based resource trading brings considerable utility to both participants, while significantly outperforming baseline methods on critical factors such as trading failures, fairness, negotiation latency, and cost.
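The paper's exact utility and risk model is not given here, so the sketch below only illustrates the general shape of a risk-evaluated expected utility: the mean payoff of a forward contract over historical samples, penalized by its variance. The payoff values, forward price, and risk weight are all hypothetical.

```python
# Illustrative sketch only: a risk-adjusted expected utility for a forward
# resource contract, estimated from historical samples. The utility form,
# price, and risk weight are assumptions, not the paper's exact model.
import statistics

def expected_utility(payoff_samples, risk_weight=0.5):
    """Mean payoff penalized by its variance (a mean-variance risk evaluation)."""
    mean = statistics.mean(payoff_samples)
    var = statistics.pvariance(payoff_samples)
    return mean - risk_weight * var

# Buyer's payoff per historical scenario: value derived from the contracted
# resources minus the agreed forward price (all numbers hypothetical).
forward_price = 10.0
buyer_payoffs = [v - forward_price for v in [14.0, 9.5, 12.0, 8.0, 13.5]]
print(expected_utility(buyer_payoffs))
```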
Edge computing has been developed to utilize multiple tiers of resources for privacy, cost, and Quality of Service (QoS) reasons. Edge workloads are characteristically data-driven and latency-sensitive; as a result, edge systems have evolved to be both heterogeneous and distributed. These unique characteristics of edge workloads and edge systems motivated EdgeBench, a workflow-based benchmark that aims to enable exploration of the full design space of edge workloads and edge systems. EdgeBench is both customizable and representative: it allows users to customize the workflow logic of edge workloads, the data storage backends, and the distribution of the individual workflow stages across computing tiers. To illustrate the usability of EdgeBench, we also implement two representative edge workflows, a video analytics workflow and an IoT hub workflow, representing two distinct but common classes of edge workloads. Both workflows are evaluated using the workflow-level and function-level metrics reported by EdgeBench to expose the performance bottlenecks of both the edge systems and the edge workloads.
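EdgeBench's actual API is not shown in the abstract, so the following sketch is purely hypothetical: it models a workflow as a list of stages, each naming a function, a storage backend, and the computing tier it is pinned to, mirroring the customization the abstract describes.

```python
# Hypothetical sketch of a workflow-based edge benchmark configuration;
# EdgeBench's real API may differ. Each stage names a function, a data
# storage backend, and the computing tier it is assigned to.
video_analytics = {
    "name": "video-analytics",
    "stages": [
        {"fn": "frame_capture",    "tier": "device", "store": "local-fs"},
        {"fn": "object_detect",    "tier": "edge",   "store": "minio"},
        {"fn": "result_aggregate", "tier": "cloud",  "store": "s3"},
    ],
}

def tiers_used(workflow):
    """Report which tier each workflow stage is pinned to."""
    return [(s["fn"], s["tier"]) for s in workflow["stages"]]

print(tiers_used(video_analytics))
```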
Serverless computing has grown rapidly since the launch of Amazon's Lambda platform. Function-as-a-Service (FaaS), a key enabler of serverless computing, allows an application to be decomposed into simple, standalone functions that are executed on a FaaS platform, which is responsible for deploying the functions and provisioning resources for them. Many of today's cloud applications spread over heterogeneous, connected computing resources and are highly dynamic in their structure and resource requirements. However, FaaS platforms are limited to homogeneous clusters and homogeneous functions, and do not account for the data access behavior of functions before scheduling them. We introduce an extension of FaaS to heterogeneous clusters and heterogeneous functions through a network of distributed heterogeneous target platforms called a Function Delivery Network (FDN), where a target platform is a combination of a cluster of homogeneous nodes and a FaaS platform on top of it. The FDN provides Function-Delivery-as-a-Service (FDaaS), delivering each function to the right target platform. We showcase the opportunities the FDN offers, such as the varied characteristics of target platforms, collaborative execution across multiple target platforms, and data localization, in fulfilling two objectives when scheduling functions: meeting Service Level Objective (SLO) requirements and energy efficiency. We evaluate over five distributed target platforms using FDNInspector, a tool we developed for benchmarking distributed target platforms. In our evaluation, scheduling functions on an edge target platform reduced overall energy consumption by 17x compared to scheduling on a high-end target platform, without violating the SLO requirements.
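As a rough illustration of the scheduling trade-off described above, the sketch below picks, from a set of candidate target platforms, the most energy-efficient one whose predicted latency still meets the function's SLO. The platform names and the latency and energy figures are invented for illustration.

```python
# Minimal sketch of SLO-aware, energy-efficient function placement: among
# candidate target platforms, choose the lowest-energy one whose predicted
# latency meets the SLO. All numbers are illustrative.
def pick_target(platforms, slo_ms):
    feasible = [p for p in platforms if p["latency_ms"] <= slo_ms]
    if not feasible:
        raise RuntimeError("no platform can meet the SLO")
    return min(feasible, key=lambda p: p["energy_j"])

platforms = [
    {"name": "edge-arm",     "latency_ms": 180, "energy_j": 2.0},
    {"name": "high-end-x86", "latency_ms": 40,  "energy_j": 35.0},
]
# The slower edge platform wins because it still meets the 200 ms SLO
# while consuming far less energy than the high-end platform.
print(pick_target(platforms, slo_ms=200))
```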
Ao Zhou, Jianlei Yang, Yeqi Gao (2021)
Graph neural networks (GNNs) have achieved state-of-the-art performance on various industrial tasks. However, the poor efficiency of GNN inference and frequent Out-Of-Memory (OOM) problems limit the successful application of GNNs on edge computing platforms. To tackle these problems, a feature decomposition approach is proposed to optimize the memory efficiency of GNN inference. The proposed approach achieves substantial improvements on various GNN models across a wide range of datasets, speeding up inference by up to 3x. Furthermore, feature decomposition significantly reduces peak memory usage (up to a 5x improvement in memory efficiency) and mitigates OOM problems during GNN inference.
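The paper's decomposition scheme is not detailed in the abstract, but the underlying idea can be sketched generically: because neighbor aggregation (the matrix product adj @ X) is separable across feature columns, the feature matrix can be processed in slices so that only one slice's working set is resident at a time, lowering peak memory.

```python
# Sketch of the general feature-decomposition idea (not the paper's code):
# compute adj @ feats one feature-chunk at a time so the intermediate
# working set stays small during inference.
import numpy as np

def aggregate_decomposed(adj, feats, chunk=64):
    """Neighbor aggregation adj @ feats, processed in feature-column slices."""
    n, f = feats.shape
    out = np.empty((adj.shape[0], f), dtype=feats.dtype)
    for start in range(0, f, chunk):
        end = min(start + chunk, f)
        out[:, start:end] = adj @ feats[:, start:end]  # small working set
    return out

adj = np.random.rand(1000, 1000).astype(np.float32)   # toy dense adjacency
feats = np.random.rand(1000, 512).astype(np.float32)  # node feature matrix
# Chunked aggregation matches the monolithic product up to float32 rounding.
np.testing.assert_allclose(aggregate_decomposed(adj, feats),
                           adj @ feats, rtol=1e-4)
```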
The computational demands of scientific applications are continuously increasing. The emergence of cloud computing has enabled on-demand resource allocation; however, relying solely on infrastructure-as-a-service does not achieve the degree of flexibility required by the scientific community. Here we present a microservice-oriented methodology in which scientific applications run in a distributed orchestration platform as software containers, referred to as on-demand virtual research environments. The methodology is vendor agnostic, and we provide an open-source implementation that supports the major cloud providers, offering scalable management of scientific pipelines. We demonstrate the applicability and scalability of our methodology in life science applications, but the methodology is general and can be applied to other scientific domains.
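As a minimal, assumption-laden sketch of the container-per-step pipeline pattern the abstract describes, the example below runs three hypothetical pipeline stages as Docker containers sharing a data volume; a real virtual research environment would hand this to an orchestration platform rather than run it on a single host.

```python
# Illustrative sketch of a container-per-step scientific pipeline using the
# Docker SDK for Python. Images and the shared volume are hypothetical.
import docker

client = docker.from_env()

pipeline = [
    ("example/qc-step:1.0",      "quality control"),
    ("example/align-step:1.0",   "sequence alignment"),
    ("example/analyze-step:1.0", "statistical analysis"),
]

for image, desc in pipeline:
    # Each scientific step runs in its own container and blocks until done;
    # all steps exchange data through a shared host volume.
    print(f"running {desc} ...")
    client.containers.run(
        image,
        remove=True,  # clean up the container after it exits
        volumes={"/data": {"bind": "/data", "mode": "rw"}},
    )
```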
