
Cloud Scheduler: a resource manager for distributed compute clouds

Added by R. J. Sobie
Publication date: 2010
Language: English





The availability of Infrastructure-as-a-Service (IaaS) computing clouds gives researchers access to a large set of new resources for running complex scientific applications. However, exploiting cloud resources for large numbers of jobs requires significant effort and expertise. In order to make it simple and transparent for researchers to deploy their applications, we have developed a virtual machine resource manager (Cloud Scheduler) for distributed compute clouds. Cloud Scheduler boots and manages user-customized virtual machines in response to a user's job submissions. We describe the motivation and design of the Cloud Scheduler and present results on its use on both science and commercial clouds.
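The paper itself does not include code, but the behaviour described above, watching a job queue and booting user-specified virtual machine images on whichever cloud has free capacity, can be sketched roughly as follows. The Job and Cloud classes, the image field, and the single-pass loop are illustrative assumptions for this summary, not Cloud Scheduler's actual interfaces.

```python
# Minimal sketch of a VM resource-manager control loop (hypothetical classes,
# not Cloud Scheduler's real API): boot a user-customized VM for each queued job.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Job:
    job_id: str
    vm_image: str          # user-customized VM image requested by the job
    state: str = "queued"  # queued -> running

@dataclass
class Cloud:
    name: str
    max_vms: int
    running: Dict[str, str] = field(default_factory=dict)  # vm_id -> image

    def free_slots(self) -> int:
        return self.max_vms - len(self.running)

    def boot_vm(self, vm_id: str, image: str) -> None:
        # A real manager would call an IaaS API here instead of a dict update.
        self.running[vm_id] = image

def schedule(queue: List[Job], clouds: List[Cloud]) -> None:
    """One pass of the loop: for each queued job, boot a VM of the requested
    image on the first cloud that still has free capacity."""
    for job in queue:
        if job.state != "queued":
            continue
        for cloud in clouds:
            if cloud.free_slots() > 0:
                cloud.boot_vm(f"vm-{job.job_id}", job.vm_image)
                job.state = "running"
                break

queue = [Job("1", "atlas-image"), Job("2", "babar-image")]
clouds = [Cloud("science-cloud", max_vms=1), Cloud("commercial-cloud", max_vms=4)]
schedule(queue, clouds)
print([(c.name, c.running) for c in clouds])
```

In such a design the job scheduler stays unchanged; the resource manager only adjusts the pool of booted VMs to match the demand it observes in the queue.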




Related research

In hardware virtualization, a hypervisor provides multiple Virtual Machines (VMs) on a single physical system, each executing a separate operating system instance. The hypervisor schedules execution of these VMs much as the scheduler in an operating system does, balancing factors such as fairness and I/O performance. As in an operating system, the scheduler may be vulnerable to malicious behavior on the part of users seeking to deny service to others or to maximize their own resource usage. Recently, publicly available cloud computing services such as Amazon EC2 have used virtualization to provide customers with virtual machines running on the provider's hardware, typically charging by wall-clock time rather than resources consumed. Under this business model, manipulation of the scheduler may allow theft of service at the expense of other customers, rather than merely reallocating resources within the same administrative domain. We describe a flaw in the Xen scheduler that allows virtual machines to consume almost all CPU time, in preference to other users, and demonstrate kernel-based and user-space versions of the attack.
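The abstract does not spell out the mechanism, but one plausible illustration, assuming a scheduler that bills whichever VM happens to be running at periodic accounting ticks, is a guest workload that computes between ticks and sleeps across them. The 10 ms tick period, the margin, and the loop below are assumptions made purely for illustration, not the attack code from the paper.

```python
# Hypothetical illustration only: a user-space loop that computes between
# assumed 10 ms accounting ticks and sleeps across each tick, so a scheduler
# that charges whoever is running at the tick rarely charges this process.
import time

TICK = 0.010     # assumed accounting-tick period (10 ms)
MARGIN = 0.002   # stop computing this long before each tick

def next_tick(now: float) -> float:
    """Time of the next accounting tick after `now`."""
    return (int(now / TICK) + 1) * TICK

def stealthy_worker(duration: float = 1.0) -> int:
    iterations = 0
    start = time.monotonic()
    while time.monotonic() - start < duration:
        tick = next_tick(time.monotonic())
        while time.monotonic() < tick - MARGIN:
            iterations += 1                      # burn CPU between ticks
        # be asleep when the tick fires, so the sampler sees an idle VM
        time.sleep(max(0.0, tick + 0.001 - time.monotonic()))
    return iterations

print("work iterations completed:", stealthy_worker())
```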
According to the pay-per-use model adopted in clouds, the more resources an application running in a cloud computing environment consumes, the more the owner of that application is charged. Applying intelligent solutions to minimize resource consumption is therefore of great importance. Because centralized solutions are deemed unsuitable for large distributed systems or large-scale applications, we propose a fully distributed algorithm (called DRA) to overcome these scalability issues. Specifically, DRA migrates the inter-communicating components of an application, such as processes or virtual machines, close to each other to minimize the total resource consumption. Migration decisions are made dynamically and based only on local information. We prove that DRA converges and always reaches the optimal solution.
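As a rough illustration of the kind of purely local decision rule such an algorithm relies on (the summary above does not give DRA's exact cost model, so the data structures below are assumptions): a component inspects only the traffic it exchanges with its own neighbours and moves to the host it talks to most, since co-located communication is assumed to cost nothing.

```python
# Hypothetical sketch of a local migration decision, not the paper's actual
# DRA algorithm: move a component to the host with which it exchanges the
# most traffic, so that the remaining remote traffic is minimized.
from collections import defaultdict
from typing import Dict

def best_host(current_host: str,
              neighbour_traffic: Dict[str, float],
              neighbour_host: Dict[str, str]) -> str:
    """Pick a host for one component using only its own neighbours'
    traffic volumes and locations."""
    traffic_per_host = defaultdict(float)
    for neighbour, volume in neighbour_traffic.items():
        traffic_per_host[neighbour_host[neighbour]] += volume
    total = sum(neighbour_traffic.values())
    candidates = dict(traffic_per_host)
    candidates.setdefault(current_host, 0.0)
    # Remote cost after moving to host h = total traffic - traffic local to h.
    return min(candidates, key=lambda h: total - candidates[h])

# Component on host A talks to n1 (5 MB/s, host A) and n2 (12 MB/s, host B):
print(best_host("A", {"n1": 5.0, "n2": 12.0}, {"n1": "A", "n2": "B"}))  # -> B
```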
In this paper we formulate the fixed-budget resource allocation game to understand the performance of a distributed market-based resource allocation system. Multiple users decide how to distribute their budget (bids) among multiple machines according to their individual preferences in order to maximize their individual utility. We examine both the efficiency and the fairness of the allocation at the equilibrium, where fairness is evaluated through the measures of utility uniformity and envy-freeness. We show analytically and through simulations that, despite being highly decentralized, such a system converges quickly to an equilibrium and that, unlike the social optimum, which achieves high efficiency but poor fairness, the proposed allocation scheme achieves a good balance of high efficiency and fairness at the equilibrium.
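The market sketched here is easy to make concrete under a proportional-share reading (an assumption about the model; the paper's exact utility functions are not reproduced in this summary): each user splits a fixed budget into bids, and a user's share of a machine is its bid divided by the total bids placed on that machine.

```python
# Sketch of a fixed-budget bidding game with proportional-share allocation.
# The bid values and unit machine capacities are illustrative assumptions.
from typing import Dict, List

def allocate(bids: List[Dict[str, float]]) -> List[Dict[str, float]]:
    """bids[i][m] is user i's bid on machine m; each user receives
    bid / (total bids on m) of machine m's capacity (capacity = 1)."""
    machines = {m for user in bids for m in user}
    totals = {m: sum(user.get(m, 0.0) for user in bids) for m in machines}
    return [{m: (user.get(m, 0.0) / totals[m] if totals[m] else 0.0)
             for m in machines} for user in bids]

# Two users, each with a budget of 10, split across two machines:
bids = [{"m1": 7.0, "m2": 3.0},   # user 0 values m1 more
        {"m1": 2.0, "m2": 8.0}]   # user 1 values m2 more
for i, share in enumerate(allocate(bids)):
    print(f"user {i}:", {m: round(s, 2) for m, s in sorted(share.items())})
```

With these bids, user 0 ends up with about 78% of m1 and 27% of m2, and user 1 with the complements, which is the kind of preference-weighted split the equilibrium analysis studies.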
In today's enterprise storage systems, supported data services such as snapshot delete or drive rebuild can cause tremendous performance interference if executed inline alongside heavy foreground IO, often leading to missed SLOs (Service Level Objectives). Typical storage system applications such as web or VDI (Virtual Desktop Infrastructure) follow a repetitive high/low workload pattern that can be learned and forecast. We propose a priority-based background scheduler that learns this repetitive pattern and allows storage systems to maintain peak performance, and in turn meet service level objectives (SLOs), while supporting a number of data services. When foreground IO demand intensifies, system resources are dedicated to servicing foreground IO requests, and any background processing that can be deferred is recorded to be processed in future idle cycles, as long as the forecast shows that the storage pool has remaining capacity. The smart background scheduler adopts a resource partitioning model that allows foreground and background IO to execute together as long as foreground IO is not impacted, with the scheduler harnessing any free cycles to clear the background debt. Using traces from a VDI application, we show how our technique surpasses a method that statically limits the deferred background debt, reducing SLO violations from 54.6% with a fixed background debt watermark to merely 6.2% when the watermark is set dynamically by our smart background scheduler.
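The core policy can be sketched as a simple rule, with the forecast signal, threshold, and capacity check below being illustrative assumptions rather than the authors' exact design: when forecast foreground demand is high, background requests are appended to a debt queue; in predicted idle periods the debt is drained.

```python
# Illustrative sketch of a forecast-driven background scheduler: defer
# background work under heavy foreground IO and drain the debt when idle.
# The threshold, forecast input, and capacity model are assumptions.
from collections import deque

class SmartBackgroundScheduler:
    def __init__(self, busy_threshold: float, pool_capacity: int):
        self.busy_threshold = busy_threshold  # foreground IOPS above which we defer
        self.pool_capacity = pool_capacity    # max background requests we may defer
        self.debt = deque()                   # deferred background requests

    def submit_background(self, task: str, forecast_foreground_iops: float):
        """Defer the task if the forecast says foreground IO is heavy and the
        pool can still absorb the debt; otherwise run it inline."""
        if (forecast_foreground_iops >= self.busy_threshold
                and len(self.debt) < self.pool_capacity):
            self.debt.append(task)
            return None
        return task

    def idle_cycle(self, forecast_foreground_iops: float):
        """Drain deferred work while the forecast says we are idle."""
        ran = []
        while self.debt and forecast_foreground_iops < self.busy_threshold:
            ran.append(self.debt.popleft())
        return ran

sched = SmartBackgroundScheduler(busy_threshold=5000, pool_capacity=100)
sched.submit_background("snapshot-delete", forecast_foreground_iops=9000)  # deferred
print(sched.idle_cycle(forecast_foreground_iops=1200))  # -> ['snapshot-delete']
```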
P2P clusters like the Grid and PlanetLab enable, in principle, the same statistical multiplexing efficiency gains for computing that the Internet provides for networking. The key unsolved problem is resource allocation. Existing solutions are not economically efficient and require high latency to acquire resources. We designed and implemented Tycoon, a market-based distributed resource allocation system based on an Auction Share scheduling algorithm. Preliminary results show that Tycoon achieves low latency and high fairness while providing incentives for truth-telling on the part of strategic users.
