
Hierarchy-Based Algorithms for Minimizing Makespan under Precedence and Communication Constraints

Posted by Jakub Tarnawski
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





We consider the classic problem of scheduling jobs with precedence constraints on a set of identical machines to minimize the makespan objective function. Understanding the exact approximability of the problem when the number of machines is a constant is a well-known question in scheduling theory. Indeed, an outstanding open problem from the classic book of Garey and Johnson asks whether this problem is NP-hard even in the case of 3 machines and unit-length jobs. In a recent breakthrough, Levey and Rothvoss gave a $(1+\epsilon)$-approximation algorithm, which runs in nearly quasi-polynomial time, for the case when jobs have unit lengths. However, the substantially more difficult case where jobs have arbitrary processing lengths has remained open. We make progress on this more general problem. We show that there exists a $(1+\epsilon)$-approximation algorithm (with a running time similar to that of Levey and Rothvoss) for the non-migratory setting: when every job has to be scheduled entirely on a single machine, but within a machine the job need not be scheduled during consecutive time steps. Further, we also show that our algorithmic framework generalizes to another classic scenario where, along with the precedence constraints, the jobs also have communication delay constraints. Both of these fundamental problems are highly relevant to the practice of datacenter scheduling.
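To make the problem concrete, below is a minimal Python sketch of Graham-style list scheduling, the classic greedy baseline for precedence-constrained makespan minimization (a $(2-1/m)$-approximation): whenever a machine is idle, start any job whose predecessors have all finished. This is not the $(1+\epsilon)$-approximation algorithm described in the abstract; the function name and the dict encoding of jobs and predecessors are assumptions made for the example.

```python
import heapq

def list_schedule(jobs, preds, m):
    """Graham-style list scheduling on m identical machines.
    jobs:  dict job -> processing time
    preds: dict job -> list of predecessor jobs
    Returns the makespan of the greedy schedule."""
    indeg = {j: len(preds.get(j, [])) for j in jobs}
    succs = {j: [] for j in jobs}
    for j in jobs:
        for p in preds.get(j, []):
            succs[p].append(j)

    ready = [j for j in jobs if indeg[j] == 0]  # all predecessors done
    running = []                                # min-heap of (finish_time, job)
    now = makespan = 0
    while ready or running:
        # Start as many ready jobs as there are idle machines.
        while ready and len(running) < m:
            j = ready.pop()
            heapq.heappush(running, (now + jobs[j], j))
        # Jump to the next completion event.
        now, done = heapq.heappop(running)
        makespan = max(makespan, now)
        for s in succs[done]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return makespan

# Example: a diamond-shaped precedence DAG on 2 machines.
jobs = {"a": 3, "b": 2, "c": 2, "d": 1}
preds = {"b": ["a"], "c": ["a"], "d": ["b", "c"]}
print(list_schedule(jobs, preds, 2))  # -> 6 (a, then b and c in parallel, then d)
```

Note that list scheduling runs each job consecutively on one machine, so it is in particular a valid schedule for the non-migratory setting the paper targets.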




Read also

MapReduce is a popular parallel computing paradigm for Big Data processing in clusters and data centers. It is observed that different job execution orders and MapReduce slot configurations for a MapReduce workload yield significantly different performance with respect to the makespan, total completion time, system utilization, and other performance metrics. There are quite a few algorithms for minimizing the makespan of multiple MapReduce jobs; however, these algorithms are heuristic or suboptimal. The best known algorithm for minimizing the makespan is a 3-approximation obtained by applying Johnson's rule. In this paper, we propose an approach called the UAAS algorithm to meet the conditions of the classical Johnson model, so that the Johnson model can still be used for an optimal solution. We explain how to adapt to the Johnson model and present a few key features of our proposed method.
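For reference, Johnson's rule itself is simple to state. The sketch below computes the optimal order in the classical two-machine flow-shop model, with hypothetical (stage1, stage2) time pairs standing in for a job's map and reduce stages; it illustrates the model the UAAS approach adapts to, not the UAAS algorithm itself.

```python
def johnson_order(jobs):
    """Johnson's rule for the two-machine flow shop.
    jobs: list of (stage1_time, stage2_time) pairs.
    Jobs with stage1 <= stage2 go first, by increasing stage1;
    the rest go last, by decreasing stage2."""
    first = sorted([j for j in jobs if j[0] <= j[1]], key=lambda j: j[0])
    last = sorted([j for j in jobs if j[0] > j[1]], key=lambda j: -j[1])
    return first + last

def flowshop_makespan(order):
    """Makespan of a two-stage pipeline run in the given order."""
    t1 = t2 = 0
    for a, b in order:
        t1 += a                # stage 1 completes this job
        t2 = max(t2, t1) + b   # stage 2 waits for both stages to be free
    return t2

jobs = [(3, 6), (5, 2), (1, 2), (6, 6), (7, 5)]
print(flowshop_makespan(johnson_order(jobs)))  # -> 24
```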
Kook Jin Ahn, Sudipto Guha (2013)
In this paper we consider graph algorithms in models of computation where the space usage (random-access storage, in addition to the read-only input) is sublinear in the number of edges $m$ and access to the input data is constrained. These questions arise in many natural settings, and in particular in the analysis of MapReduce and similar algorithms that model constrained parallelism with sublinear central processing. In SPAA 2011, Lattanzi et al. provided an $O(1)$-approximation of maximum matching using $O(p)$ rounds of iterative filtering via MapReduce and $O(n^{1+1/p})$ space of central processing for a graph with $n$ nodes and $m$ edges. We focus on weighted nonbipartite maximum matching in this paper. For any constant $p>1$, we provide an iterative sampling-based algorithm for computing a $(1-\epsilon)$-approximation of the weighted nonbipartite maximum matching that uses $O(p/\epsilon)$ rounds of sampling and $O(n^{1+1/p})$ space. The results extend to $b$-matching with small changes. This paper combines the adaptive sketching literature with fast primal-dual algorithms based on relaxed Dantzig-Wolfe decision procedures. Each round of sampling is implemented through linear sketches and executed in a single round of MapReduce. The paper also proves that nonstandard linear relaxations of a problem, in particular penalty-based formulations, are helpful in MapReduce and similar settings in reducing the adaptive dependence of the iterations.
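As a point of contrast with the sketching approach, the textbook sequential baseline for weighted matching is the greedy rule sketched below (a 1/2-approximation): scan edges in decreasing weight and take an edge whenever both endpoints are still free. The paper's contribution is reaching $(1-\epsilon)$ with sublinear central space in few rounds, which this single-machine sketch does not attempt; the edge-triple encoding is an assumption of the example.

```python
def greedy_matching(edges):
    """Greedy 1/2-approximate weighted matching.
    edges: list of (weight, u, v) triples."""
    used, matching = set(), []
    for w, u, v in sorted(edges, reverse=True):  # decreasing weight
        if u not in used and v not in used:
            used.update((u, v))
            matching.append((u, v, w))
    return matching

print(greedy_matching([(5, "a", "b"), (4, "b", "c"), (3, "c", "d")]))
# -> [('a', 'b', 5), ('c', 'd', 3)]
```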
The noisy broadcast model was first studied in [Gallager, TranInf88], where an $n$-character input is distributed among $n$ processors so that each processor receives one input bit. Computation proceeds in rounds, where in each round each processor broadcasts a single character, and each reception is corrupted independently at random with some probability $p$. [Gallager, TranInf88] gave an algorithm for all processors to learn the input in $O(\log\log n)$ rounds with high probability. Later, a matching lower bound of $\Omega(\log\log n)$ was given in [Goyal, Kindler, Saks; SICOMP08]. We study a relaxed version of this model where each reception is erased and replaced with a `?' independently with probability $p$. In this relaxed model, we break past the lower bound of [Goyal, Kindler, Saks; SICOMP08] and obtain an $O(\log^* n)$-round algorithm for all processors to learn the input with high probability. We also show an $O(1)$-round algorithm for the same problem when the alphabet size is $\Omega(\mathrm{poly}(n))$.
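The distinction between corruption and erasure is easy to see in a toy simulation of a single round of the erasure variant, sketched below under the assumption that each processor holds one character; it illustrates only the communication model, not the $O(\log^* n)$ protocol.

```python
import random

def broadcast_round(symbols, p):
    """One round of the erasure broadcast model: every processor's symbol
    is sent to every processor, but each reception is independently
    replaced by '?' with probability p (the receiver knows it was lost)."""
    n = len(symbols)
    # received[i][j] = what processor i hears from processor j
    return [[symbols[j] if random.random() > p else "?" for j in range(n)]
            for _ in range(n)]

random.seed(0)
print(broadcast_round(list("1011"), 0.3))
```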
We consider communication problems in the setting of mobile agents deployed in an edge-weighted network. The assumption of the paper is that each agent has some energy that it can transfer to any other agent when they meet (together with the information it holds). The paper deals with three communication problems: data delivery, convergecast, and broadcast. These problems are posed for a centralized scheduler which has full knowledge of the instance. It is already known that, without energy exchange, all three problems are NP-complete even if the network is a line. Surprisingly, if we allow the agents to exchange energy, we show that all three problems are polynomially solvable on trees and have linear-time algorithms on the line. On the other hand, for general undirected and directed graphs we show that these problems are still NP-complete even if energy exchange is allowed.
We study approximation algorithms for the problem of minimizing the makespan on a set of machines with uncertainty on the processing times of jobs. In the model we consider, which goes back to \cite{BertsimasS03}, once the schedule is defined an adversary can pick a scenario where deviation is added to some of the jobs' processing times. Given only the maximal cardinality of these jobs, and the magnitude of potential deviation for each job, the goal is to optimize the worst-case scenario. We consider both the cases of identical and unrelated machines. Our main result is an EPTAS for the case of identical machines. We also provide a $3$-approximation algorithm and an inapproximability ratio of $2-\epsilon$ for the case of unrelated machines.
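For intuition on the identical-machines setting, the classical Longest Processing Time (LPT) rule sketched below is a $(4/3 - 1/(3m))$-approximation for the nominal, deviation-free makespan; it is a baseline only, not the EPTAS for the robust model described above.

```python
import heapq

def lpt_makespan(times, m):
    """LPT rule: sort jobs by decreasing length and always place the
    next job on the currently least-loaded of the m machines."""
    loads = [0] * m            # min-heap of machine loads
    heapq.heapify(loads)
    for t in sorted(times, reverse=True):
        heapq.heappush(loads, heapq.heappop(loads) + t)
    return max(loads)

print(lpt_makespan([7, 5, 4, 3, 3], 2))  # -> 12 (optimal is 11: {7,4} vs {5,3,3})
```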