Rapid technological advances in the Internet of Things (IoT) are making the blueprint of Smart Cities feasible by integrating heterogeneous cloud/fog/edge computing paradigms that collaboratively provide diverse smart services in our cities and communities. Thanks to attractive features such as fine granularity and loose coupling, the microservices architecture has been proposed to provide scalable and extensible services in large-scale distributed IoT systems. Recent studies have evaluated and analyzed the performance interference between microservices in cloud computing environments. However, these studies are not holistic for IoT applications given the constraints of edge devices, such as limited computational resources and network capacity. This paper investigates multiple microservice deployment policies on an edge computing platform. The microservices are developed as Docker containers, and comprehensive experimental results demonstrate the performance and interference of microservices running under benchmark scenarios.
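The abstract does not specify the deployment tooling, but as an illustration, a resource-aware deployment policy for a containerized microservice on a constrained edge node can be sketched with the Docker SDK for Python, capping each container's CPU and memory share. This is a minimal sketch under assumptions: the image name, service name, and resource limits are hypothetical placeholders, not values from the paper.

```python
# Minimal sketch, assuming the Docker SDK for Python ("docker" package) is
# installed and a Docker daemon runs on the edge node. The image name and
# limits below are hypothetical, not taken from the paper.
import docker


def deploy_microservice(image: str, name: str, cpus: float, mem: str):
    """Run one containerized microservice with hard CPU/memory caps,
    mimicking a resource-aware deployment policy on an edge device."""
    client = docker.from_env()
    return client.containers.run(
        image,
        name=name,
        detach=True,                 # run in the background
        nano_cpus=int(cpus * 1e9),   # e.g. 0.5 -> half of one CPU core
        mem_limit=mem,               # e.g. "256m"
        restart_policy={"Name": "on-failure", "MaximumRetryCount": 3},
    )


if __name__ == "__main__":
    # Hypothetical benchmark microservice pinned to modest edge resources.
    container = deploy_microservice("example/iot-benchmark:latest",
                                    name="svc-a", cpus=0.5, mem="256m")
    print(container.name, container.status)
```

In a setup like this, the CPU and memory caps can be varied per deployment policy and swept across benchmark scenarios to observe interference between co-located microservice containers on the same edge device.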