
A Survey on Time-Sensitive Resource Allocation in the Cloud Continuum

Publication date: 2020
Language: English





Artificial Intelligence (AI) and Internet of Things (IoT) applications are growing rapidly in today's world, where they are continuously connected to the internet and process, store, and exchange information among devices and their environment. Cloud and edge platforms are crucial to these applications because of their inherently compute-intensive and resource-constrained nature. One of the foremost challenges in cloud and edge resource allocation is the efficient management of computation and communication resources to meet the performance and latency guarantees of the applications. The heterogeneity of cloud resources (processors, memory, storage, bandwidth), variable cost structures, and unpredictable workload patterns make the design of resource allocation techniques complex. Numerous research studies have been carried out to address this intricate problem. In this paper, we review the current state-of-the-art resource allocation techniques for the cloud continuum, in particular those that consider time-sensitive applications. Furthermore, we present the key challenges in the resource allocation problem for the cloud continuum, a taxonomy to classify the existing literature, and the potential research gaps.
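To make the problem concrete, the following minimal sketch (not taken from any of the surveyed works; node names, speeds, capacities, and latency figures are purely illustrative assumptions) places latency-sensitive tasks on edge and cloud nodes greedily, handling the tightest deadlines first and rejecting any task whose end-to-end latency budget cannot be met.

```python
# Minimal, hypothetical sketch of time-sensitive placement across a
# cloud-edge continuum. All parameters are made-up values.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    speed: float               # relative compute speed (1.0 = baseline)
    cpu_free: float            # available CPU cores
    network_latency_ms: float  # round-trip latency to the data source

@dataclass
class Task:
    name: str
    cpu_demand: float   # required CPU cores
    compute_ms: float   # execution time on a baseline-speed node
    deadline_ms: float  # end-to-end latency budget

def end_to_end_ms(task, node):
    """Estimated end-to-end latency: network delay plus scaled compute time."""
    return node.network_latency_ms + task.compute_ms / node.speed

def place(tasks, nodes):
    """Greedy placement: tightest deadline first, pick the feasible node
    with the lowest estimated end-to-end latency."""
    placement = {}
    for task in sorted(tasks, key=lambda t: t.deadline_ms):
        feasible = [n for n in nodes
                    if n.cpu_free >= task.cpu_demand
                    and end_to_end_ms(task, n) <= task.deadline_ms]
        if not feasible:
            placement[task.name] = None  # latency budget cannot be met
            continue
        best = min(feasible, key=lambda n: end_to_end_ms(task, n))
        best.cpu_free -= task.cpu_demand
        placement[task.name] = best.name
    return placement

nodes = [Node("edge-1", speed=1.0, cpu_free=2, network_latency_ms=5),
         Node("cloud-1", speed=4.0, cpu_free=32, network_latency_ms=60)]
tasks = [Task("detect", cpu_demand=1, compute_ms=20, deadline_ms=40),
         Task("train", cpu_demand=8, compute_ms=800, deadline_ms=1000)]
print(place(tasks, nodes))  # latency-critical task goes to the edge, heavy task to the cloud
```

Real allocators must additionally handle bandwidth, cost structures, and workload uncertainty, which is precisely where the surveyed techniques differ.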




Related research

In Wolke et al. [1], we compare the efficiency of different resource allocation strategies experimentally, focusing on dynamic environments where virtual machines need to be allocated and deallocated to servers over time. In this companion paper, we describe the simulation framework and how to run simulations to replicate existing experiments or carry out new ones within the framework.
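As a rough illustration of what such a simulation involves (this is a toy sketch, not the framework of Wolke et al. [1]; server counts, demand ranges, and arrival probabilities are arbitrary assumptions), the loop below allocates and deallocates VMs over time under a first-fit strategy and reports how many servers remain active.

```python
# Toy dynamic-allocation simulation: VMs arrive and depart over time and a
# pluggable strategy decides which server hosts each new VM.
import random

def first_fit(servers, demand):
    """Return the index of the first server with enough spare capacity."""
    for i, used in enumerate(servers):
        if used + demand <= 1.0:  # capacities normalised to 1.0
            return i
    return None

def simulate(strategy, steps=1000, seed=0):
    rng = random.Random(seed)
    servers = [0.0] * 20          # normalised utilisation of 20 servers
    running = {}                  # vm id -> (server index, demand)
    vm_id = 0
    for _ in range(steps):
        if running and rng.random() < 0.4:       # deallocate a random VM
            _, (srv, demand) = running.popitem()
            servers[srv] -= demand
        else:                                    # allocate a new VM
            demand = rng.uniform(0.05, 0.3)
            srv = strategy(servers, demand)
            if srv is not None:
                servers[srv] += demand
                running[vm_id] = (srv, demand)
                vm_id += 1
    return sum(1 for used in servers if used > 0)  # active servers at the end

print("active servers (first fit):", simulate(first_fit))
```

Other strategies (best fit, worst fit, load balancing) can be compared by passing a different `strategy` function to the same loop.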
Vehicular cloud computing has emerged as a promising solution to fulfill users demands on processing computation-intensive applications in modern driving environments. Such applications are commonly represented by graphs consisting of components and edges. However, encouraging vehicles to share resources poses significant challenges owing to users selfishness. In this paper, an auction-based graph job allocation problem is studied in vehicular cloud-assisted networks considering resource reutilization. Our goal is to map each buyer (component) to a feasible seller (virtual machine) while maximizing the buyers utility-of-service, which concerns the execution time and commission cost. First, we formulate the auction-based graph job allocation as an integer programming (IP) problem. Then, a Vickrey-Clarke-Groves based payment rule is proposed which satisfies the desired economical properties, truthfulness and individual rationality. We face two challenges: 1) the above-mentioned IP problem is NP-hard; 2) one constraint associated with the IP problem poses addressing the subgraph isomorphism problem. Thus, obtaining the optimal solution is practically infeasible in large-scale networks. Motivated by which, we develop a structure-preserved matching algorithm by maximizing the utility-of-service-gain, and the corresponding payment rule which offers economical properties and low computation complexity. Extensive simulations demonstrate that the proposed algorithm outperforms the benchmark methods considering various problem sizes.
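The sketch below is a heavily simplified, hypothetical version of this idea: components (buyers) are matched greedily to VMs (sellers) by utility-of-service gain, while the paper's structure-preserving constraints, auction payments, and subgraph-isomorphism handling are omitted. The utility form and all numbers are assumptions for illustration only.

```python
# Simplified greedy matching of graph-job components to VMs by
# utility-of-service gain (not the paper's structure-preserved algorithm).
def utility(component, vm):
    """Generic utility-of-service: value minus execution-time cost and
    commission (the paper's exact definition may differ)."""
    exec_time = component["workload"] / vm["cpu_speed"]
    return component["value"] - exec_time * vm["time_price"] - vm["commission"]

def greedy_match(components, vms):
    assignment, used = {}, set()
    for comp in sorted(components, key=lambda c: c["value"], reverse=True):
        best_vm, best_gain = None, 0.0
        for vm in vms:
            if vm["id"] in used:
                continue                 # one component per VM slot
            gain = utility(comp, vm)
            if gain > best_gain:
                best_vm, best_gain = vm, gain
        if best_vm is not None:
            assignment[comp["id"]] = best_vm["id"]
            used.add(best_vm["id"])
    return assignment

components = [{"id": "c1", "workload": 10, "value": 8},
              {"id": "c2", "workload": 4, "value": 5}]
vms = [{"id": "vm1", "cpu_speed": 2.0, "time_price": 0.5, "commission": 1.0},
       {"id": "vm2", "cpu_speed": 1.0, "time_price": 0.2, "commission": 0.5}]
print(greedy_match(components, vms))
```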
Distributed digital infrastructures for computation and analytics are now evolving towards an interconnected ecosystem that allows complex applications to be executed from IoT edge devices to the HPC cloud (the Computing Continuum, also called the Digital Continuum or the Transcontinuum). Understanding end-to-end performance in such a complex continuum is challenging: it requires reconciling many, typically contradictory, application requirements and constraints with low-level infrastructure design choices. One important challenge is to accurately reproduce relevant behaviors of a given application workflow and representative settings of the physical infrastructure underlying this complex continuum. We introduce a rigorous methodology for this process and validate it through E2Clab, the first platform to support the complete experimental cycle across the Computing Continuum: deployment, analysis, and optimization. Preliminary results with real-life use cases show that E2Clab allows one to understand and improve performance by correlating it with parameter settings, resource usage, and the specifics of the underlying infrastructure.
Kai Li, Yong Wang, Meilin Liu (2014)
Cloud computing is a newly emerging distributed computing paradigm that evolved from grid computing. Task scheduling is a core research topic in cloud computing: it studies how to allocate tasks among physical nodes so that the allocation is balanced, each task's execution cost is minimized, or the overall system performance is optimal. Unlike previous models, in which the slices of an independent task are executed sequentially and the objective is processing time, we build a model that targets response time and executes task slices in parallel. We then solve it with a method based on an improved adjusting entropy function and design a new task scheduling algorithm. Experimental results show that the response time of the proposed algorithm is much lower than those of the game-theoretic algorithm and the balanced scheduling algorithm, and that, compared with the balanced scheduling algorithm, the game-theoretic algorithm is not necessarily superior in the parallel setting even though its objective function value is better.
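The difference between the two objectives can be seen in a toy calculation (the slice sizes and node speeds below are made-up values, not figures from the paper): with parallel execution the response time is governed by the slowest slice, whereas sequential execution accumulates all slice times.

```python
# Toy illustration of sequential processing time vs. parallel response time
# for the slices of one task assigned to different nodes.
def slice_times(slices, speeds):
    """Execution time of each slice on its assigned node."""
    return [size / speed for size, speed in zip(slices, speeds)]

def sequential_processing_time(slices, speeds):
    return sum(slice_times(slices, speeds))

def parallel_response_time(slices, speeds):
    return max(slice_times(slices, speeds))

slices = [30.0, 20.0, 50.0]  # work units assigned to three nodes
speeds = [10.0, 5.0, 25.0]   # processing speed of each node

print("sequential:", sequential_processing_time(slices, speeds))  # 3 + 4 + 2 = 9
print("parallel:  ", parallel_response_time(slices, speeds))      # max(3, 4, 2) = 4
```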
This volume represents the proceedings of the 2nd International Workshop on Dynamic Resource Allocation and Management in Embedded, High Performance and Cloud Computing (DREAMCloud 2016), co-located with HiPEAC 2016 on 19th January 2016 in Prague, Czech Republic.