
Market-Oriented Online Bi-Objective Service Scheduling for Pleasingly Parallel Jobs with Variable Resources in Cloud Environments

Posted by: Li Pan
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





In this paper, we study the market-oriented online bi-objective service scheduling problem for pleasingly parallel jobs with variable resources in cloud environments, from the perspective of SaaS (Software-as-a-Service) providers who provide job-execution services. The main process of scheduling SaaS services in clouds is as follows: a SaaS provider purchases cloud instances from IaaS providers to schedule end users' jobs and charges the users accordingly. The problem has several distinctive features, such as job-oriented end users, pleasingly parallel jobs with soft deadline constraints, the online setting, and variable numbers of resources. To maximize both the revenue and the user satisfaction rate, we design an online algorithm with which SaaS providers can optimally purchase IaaS instances and schedule pleasingly parallel jobs. The proposed algorithm achieves competitive values for both objectives in polynomial run time. Theoretical analysis and simulations based on real-world Google job traces, as well as synthetic datasets, validate the effectiveness and efficiency of our algorithm.
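To make the scheduling decision concrete, the following is a minimal, self-contained Python sketch of the kind of online choice a SaaS provider faces for each arriving pleasingly parallel job: how many IaaS instances to rent so that the job's independent tasks finish by the soft deadline while the user's payment still exceeds the instance cost. It is an illustrative greedy admission rule, not the algorithm proposed in the paper; the job fields, the per-hour price, and the hourly-billing assumption are all hypothetical.

```python
# Illustrative sketch only: a greedy online admission/provisioning rule for a
# pleasingly parallel job with a soft deadline. The job fields, the per-hour
# instance price, and hourly billing are hypothetical assumptions, not the
# paper's model or algorithm.
import math
from dataclasses import dataclass

INSTANCE_PRICE_PER_HOUR = 0.10  # hypothetical IaaS on-demand price


@dataclass
class Job:
    job_id: int
    arrival: float     # arrival time (hours)
    tasks: int         # number of independent, equal-sized tasks
    task_hours: float  # run time of one task on one instance (hours)
    deadline: float    # soft deadline (absolute, hours)
    payment: float     # what the user pays if the deadline is met


def plan_job(job: Job):
    """Return (instances_to_rent, expected_profit), or None to reject the job."""
    window = job.deadline - job.arrival
    if job.task_hours > window:
        return None  # even a single task cannot finish before the deadline
    rounds = math.floor(window / job.task_hours)   # tasks one instance can run in the window
    instances = math.ceil(job.tasks / rounds)      # fewest instances that meet the deadline
    makespan = math.ceil(job.tasks / instances) * job.task_hours
    cost = instances * math.ceil(makespan) * INSTANCE_PRICE_PER_HOUR  # hourly billing
    profit = job.payment - cost
    return (instances, profit) if profit > 0 else None


if __name__ == "__main__":
    jobs = [
        Job(1, arrival=0.0, tasks=40, task_hours=0.5, deadline=4.0, payment=3.0),
        Job(2, arrival=0.5, tasks=10, task_hours=2.0, deadline=2.0, payment=1.0),
    ]
    for job in jobs:
        plan = plan_job(job)
        if plan:
            print(f"job {job.job_id}: rent {plan[0]} instances, expected profit {plan[1]:.2f}")
        else:
            print(f"job {job.job_id}: rejected (cannot be served profitably by its deadline)")
```

A real market-oriented scheduler would additionally trade off revenue against the satisfaction rate across many concurrent users and instance types, which this toy rule ignores.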


Read also

A non-invasive, cloud-agnostic approach is demonstrated for extending existing cloud platforms to include checkpoint-restart capability. Most cloud platforms currently rely on each application to provide its own fault tolerance. A uniform mechanism within the cloud itself serves two purposes: (a) direct support for long-running jobs, which would otherwise require a custom fault-tolerant mechanism for each application; and (b) the administrative capability to manage an over-subscribed cloud by temporarily swapping out jobs when higher priority jobs arrive. An advantage of this uniform approach is that it also supports parallel and distributed computations, over both TCP and InfiniBand, thus allowing traditional HPC applications to take advantage of an existing cloud infrastructure. Additionally, an integrated health-monitoring mechanism detects when long-running jobs either fail or incur exceptionally low performance, perhaps due to resource starvation, and proactively suspends the job. The cloud-agnostic feature is demonstrated by applying the implementation to two very different cloud platforms: Snooze and OpenStack. The use of a cloud-agnostic architecture also enables, for the first time, migration of applications from one cloud platform to another.
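The checkpoint-restart idea can be pictured with a tiny application-level sketch: a long-running loop that periodically writes its state to disk so it can be suspended and later resumed from the last checkpoint. Note that this is deliberately the per-application style of fault tolerance that the abstract above contrasts with its uniform, cloud-level mechanism; it is included only to show what checkpoint-restart means operationally, and the file name and work loop are hypothetical.

```python
# Minimal application-level checkpoint-restart sketch (not the cloud-level,
# non-invasive mechanism described in the abstract). All names are made up.
import os
import pickle
import time

CHECKPOINT = "job.ckpt"  # hypothetical checkpoint file


def load_state():
    """Resume from the last checkpoint if one exists, else start fresh."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return {"iteration": 0, "partial_result": 0.0}


def save_state(state):
    """Write the checkpoint atomically so a crash never leaves a torn file."""
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CHECKPOINT)


if __name__ == "__main__":
    state = load_state()
    while state["iteration"] < 100:       # stand-in for a long-running job
        state["partial_result"] += state["iteration"] ** 0.5
        state["iteration"] += 1
        if state["iteration"] % 10 == 0:  # checkpoint every 10 iterations
            save_state(state)
        time.sleep(0.01)
    print("done:", state["partial_result"])
```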
The global economic recession and the shrinking budgets of IT projects have led to the need to develop integrated information systems at a lower cost. Today, the emerging phenomenon of cloud computing aims at transforming the traditional way of computing by providing both software applications and hardware resources as a service. With the rapid evolution of Information and Communication Technology (ICT), governments, organizations, and businesses are looking for solutions to improve their services and integrate their IT infrastructures. In recent years, advanced technologies such as SOA and Cloud computing have evolved to address integration problems. The Cloud's enormous capacity at comparatively low cost makes it an ideal platform for SOA deployment. This paper deals with the combined approach of Cloud and Service-Oriented Architecture, along with a case study and a review.
The Cloud infrastructure offers end users a broad set of heterogeneous computational resources using the pay-as-you-go model. These virtualized resources can be provisioned under different pricing models, such as the unreliable model, where resources are provided at a fraction of the cost but with no guarantee of uninterrupted processing. However, this enormous gamut of opportunities comes with a great caveat, as resource management and scheduling decisions become increasingly complicated. Moreover, the uncertainty in optimally selecting resources also has a negative impact on the quality of the solutions delivered by scheduling algorithms. In this paper, we present a dynamic scheduling algorithm (the Uncertainty-Driven Scheduling, UDS, algorithm) for the management of scientific workflows in the Cloud. Our model minimizes both the makespan and the monetary cost by dynamically selecting reliable or unreliable virtualized resources. To cover the uncertainty in decision making, we adopt a Fuzzy Logic Controller (FLC) to derive the pricing model of the resources that will host every task. We evaluate the performance of the proposed algorithm using real workflow applications tested under different assumed probabilities for the revocation of unreliable resources. Numerical results depict the performance of the proposed approach, and a comparative assessment reveals the position of the paper in the relevant literature.
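As a rough illustration of how fuzzy logic can drive the reliable-versus-unreliable choice described above, the sketch below scores a task's suitability for a cheap but revocable (spot-like) instance from two fuzzified inputs: deadline slack and revocation probability. The membership functions, rule set, and 0.5 threshold are invented for the example; this is not the UDS algorithm or its FLC.

```python
# Illustrative fuzzy-style selector for reliable vs. unreliable instances.
# The membership functions and rules are hypothetical, not the paper's FLC.

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)


def spot_suitability(slack_ratio, revocation_prob):
    """Return a score in [0, 1]; higher means the unreliable instance is safer.

    slack_ratio:     (deadline - estimated finish) / estimated runtime
    revocation_prob: estimated probability that the spot instance is reclaimed
    """
    # Fuzzify the two inputs.
    slack_low = tri(slack_ratio, -0.5, 0.0, 0.5)
    slack_high = tri(slack_ratio, 0.2, 1.0, 2.0)
    risk_low = tri(revocation_prob, -0.4, 0.0, 0.4)
    risk_high = tri(revocation_prob, 0.2, 1.0, 1.4)
    # Two illustrative rules, combined by weighted average (defuzzification).
    rule_use_spot = min(slack_high, risk_low)    # plenty of slack, low risk
    rule_avoid_spot = max(slack_low, risk_high)  # tight deadline or high risk
    total = rule_use_spot + rule_avoid_spot
    return rule_use_spot / total if total else 0.5


if __name__ == "__main__":
    for slack, risk in [(1.2, 0.1), (0.1, 0.1), (1.5, 0.8)]:
        score = spot_suitability(slack, risk)
        choice = "unreliable (spot)" if score > 0.5 else "reliable (on-demand)"
        print(f"slack={slack:.1f} risk={risk:.1f} -> score={score:.2f}, choose {choice}")
```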
Kubernetes (k8s) has the potential to merge the distributed edge and the cloud, but it lacks a scheduling framework designed specifically for edge-cloud systems. Moreover, the hierarchical distribution of heterogeneous resources and the complex dependencies among requests and resources make the modeling and scheduling of k8s-oriented edge-cloud systems particularly sophisticated. In this paper, we introduce KaiS, a learning-based scheduling framework for such edge-cloud systems that aims to improve the long-term throughput rate of request processing. First, we design a coordinated multi-agent actor-critic algorithm to cater to decentralized request dispatch and dynamic dispatch spaces within the edge cluster. Second, for diverse system scales and structures, we use graph neural networks to embed system state information and combine the embedding results with multiple policy networks to reduce the orchestration dimensionality through stepwise scheduling. Finally, we adopt a two-time-scale scheduling mechanism to harmonize request dispatch and service orchestration, and we present an implementation design for deploying the above algorithms in a manner compatible with native k8s components. Experiments using real workload traces show that KaiS can successfully learn appropriate scheduling policies, irrespective of request arrival patterns and system scales. Moreover, KaiS can enhance the average system throughput rate by 14.3% while reducing the scheduling cost by 34.7% compared to baselines.
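The two-time-scale idea in KaiS can be pictured with a toy loop: requests are dispatched on a fast time scale, while service placement is re-orchestrated on a slower one. In the sketch below, simple heuristics (shortest queue, replicate onto the busiest node) stand in for KaiS's learned actor-critic and GNN-based policies; the node names, services, and load model are made up.

```python
# Toy two-time-scale dispatch/orchestration loop, in the spirit of the idea
# described above but NOT the KaiS implementation: heuristics replace the
# learned policies, and all nodes/services below are hypothetical.
import random
from collections import defaultdict

NODES = ["edge-1", "edge-2", "cloud-1"]
SERVICES = ["svc-a", "svc-b"]

# Which services each node currently hosts (initial placement is arbitrary).
placement = {"edge-1": {"svc-a"}, "edge-2": {"svc-b"}, "cloud-1": {"svc-a", "svc-b"}}
queue_len = defaultdict(int)  # outstanding requests per node


def dispatch(service):
    """Fast time scale: send the request to the least-loaded node hosting it."""
    candidates = [n for n in NODES if service in placement[n]]
    target = min(candidates, key=lambda n: queue_len[n])
    queue_len[target] += 1
    return target


def orchestrate():
    """Slow time scale: replicate every service onto the most loaded node."""
    hottest = max(NODES, key=lambda n: queue_len[n])
    for svc in SERVICES:
        placement[hottest].add(svc)


if __name__ == "__main__":
    random.seed(0)
    for step in range(1, 31):
        dispatch(random.choice(SERVICES))  # one request arrives per step
        if step % 10 == 0:                 # orchestration runs less often
            orchestrate()
            for n in NODES:                # pretend queued work drains between rounds
                queue_len[n] = 0
    print("final placement:", {n: sorted(s) for n, s in placement.items()})
```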
Workflow decision making is critical to many practical workflow applications. Scheduling in edge-cloud environments can address the high complexity of workflow applications while decreasing the data transmission delay between the cloud and end devices. However, due to the heterogeneous resources in edge-cloud environments and the complicated data dependencies between the tasks in a workflow, significant challenges for workflow scheduling remain, including the selection of an optimal task-server assignment from the numerous possible combinations. Existing studies are mainly conducted under rigid conditions without fluctuations, ignoring the fact that workflow scheduling typically takes place in uncertain environments. In this study, we focus on reducing the execution cost of workflow applications, mainly caused by task computation and data transmission, while satisfying the workflow deadline in uncertain edge-cloud environments. Triangular Fuzzy Numbers (TFNs) are adopted to represent the task processing time and data transfer time. A cost-driven fuzzy scheduling strategy based on an Adaptive Discrete Particle Swarm Optimization (ADPSO) algorithm is proposed, which employs operators from the Genetic Algorithm (GA). The strategy introduces the random two-point crossover operator, the neighborhood mutation operator, and an adaptive multipoint mutation operator of GA to effectively avoid convergence to local optima. The experimental results show that our strategy can effectively reduce the workflow execution cost in uncertain edge-cloud environments, compared with other benchmark solutions.
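To make the GA-style operators mentioned above concrete, the sketch below applies a random two-point crossover toward a guiding (best) particle and a multipoint mutation to a discrete task-to-server assignment vector. It is not the authors' ADPSO; the swarm loop, the fixed mutation rate, and the placeholder cost function are illustrative assumptions.

```python
# Illustrative GA-style operators on a discrete particle (task -> server
# assignment). Not the authors' ADPSO; the cost model is a made-up placeholder.
import random

SERVERS = 4  # hypothetical number of edge/cloud servers
TASKS = 8    # hypothetical number of workflow tasks


def random_particle():
    return [random.randrange(SERVERS) for _ in range(TASKS)]


def two_point_crossover(particle, guide):
    """Copy the segment between two random cut points from a guide particle
    (e.g. the personal or global best), as in a random two-point crossover."""
    i, j = sorted(random.sample(range(TASKS + 1), 2))
    return particle[:i] + guide[i:j] + particle[j:]


def multipoint_mutation(particle, rate):
    """Reassign each task to a random server with probability `rate`;
    an adaptive scheme would shrink `rate` as the swarm converges."""
    return [random.randrange(SERVERS) if random.random() < rate else s
            for s in particle]


def cost(particle):
    """Placeholder cost: pretend higher-numbered servers are pricier."""
    return sum(s + 1 for s in particle)


if __name__ == "__main__":
    random.seed(1)
    swarm = [random_particle() for _ in range(10)]
    best = min(swarm, key=cost)
    for _ in range(50):  # a few toy iterations
        swarm = [multipoint_mutation(two_point_crossover(p, best), rate=0.2)
                 for p in swarm]
        best = min(swarm + [best], key=cost)
    print("best assignment:", best, "cost:", cost(best))
```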