Small-scale clouds (SCs) often suffer from resource under-provisioning during peak demand, leading to an inability to satisfy service level agreements (SLAs) and a consequent loss of customers. One approach to addressing this problem is for a set of autonomous SCs to share resources among themselves in a cost-induced cooperative fashion, thereby increasing their individual capacities (when needed) without having to invest significantly in additional resources. A central problem in this context is how to share resources, for a price, so as to achieve profitable service while maintaining customer SLAs. To address this problem, in this paper we propose the SC-Share framework, which utilizes two interacting models: (i) a stochastic performance model that estimates the performance characteristics achieved under given SLA requirements, and (ii) a market-based game-theoretic model that, as shown empirically, converges to efficient resource-sharing decisions at market equilibrium. Our results include extensive evaluations that illustrate the utility of the proposed framework.
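The abstract does not reproduce the stochastic performance model itself, so the sketch below is only a minimal illustration of the kind of estimate such a model produces: it uses a standard M/M/c (Erlang-C) queueing approximation to estimate the probability of violating a waiting-time SLA, and from that how many servers an SC would need to borrow. The queueing assumption and all parameter names (arrival rate lam, service rate mu, SLA threshold t_sla, tolerance eps) are illustrative assumptions, not taken from the paper.

    import math

    def erlang_c(c: int, lam: float, mu: float) -> float:
        """Probability that an arriving request must wait in an M/M/c queue."""
        rho = lam / (c * mu)
        if rho >= 1.0:
            return 1.0  # unstable regime: effectively every request waits
        a = lam / mu  # offered load in Erlangs
        num = a ** c / (math.factorial(c) * (1.0 - rho))
        den = sum(a ** k / math.factorial(k) for k in range(c)) + num
        return num / den

    def sla_violation_prob(c: int, lam: float, mu: float, t_sla: float) -> float:
        """P(waiting time > t_sla) under the illustrative M/M/c assumptions."""
        rho = lam / (c * mu)
        if rho >= 1.0:
            return 1.0
        return erlang_c(c, lam, mu) * math.exp(-c * mu * (1.0 - rho) * t_sla)

    def servers_needed(lam: float, mu: float, t_sla: float, eps: float,
                       c_max: int = 256):
        """Smallest number of servers (own plus borrowed) keeping the
        SLA-violation probability below eps, or None if c_max is too small."""
        for c in range(1, c_max + 1):
            if sla_violation_prob(c, lam, mu, t_sla) < eps:
                return c
        return None

    if __name__ == "__main__":
        lam, mu = 45.0, 10.0     # request arrival rate and per-server service rate
        t_sla, eps = 0.2, 0.05   # SLA: wait below 0.2 s with at least 95% probability
        own = 5                  # capacity the SC owns outright (hypothetical)
        need = servers_needed(lam, mu, t_sla, eps)
        print(f"servers required: {need}, to borrow: {max(0, need - own)}")

In a framework like SC-Share, an estimate of this form would feed the market-based game-theoretic model, which then decides whether the price of borrowing the extra capacity is justified by the revenue retained through SLA compliance; the coupling shown here is a sketch of that interaction, not the paper's actual formulation.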