
Online Makespan Minimization: The Power of Restart

Added by Xiaowei Wu
Publication date: 2018
Language: English





We consider the online makespan minimization problem on identical machines. Chen and Vestjens (ORL 1997) show that the largest processing time first (LPT) algorithm is 1.5-competitive. For the special case of two machines, Noga and Seiden (TCS 2001) introduce the SLEEPY algorithm that achieves a competitive ratio of $(5 - \sqrt{5})/2 \approx 1.382$, matching the lower bound by Chen and Vestjens (ORL 1997). Furthermore, Noga and Seiden note that in many applications one can kill a job and restart it later, and they leave an open problem whether algorithms with restart can obtain better competitive ratios. We resolve this long-standing open problem on the positive end. Our algorithm has a natural rule for killing a processing job: a newly arrived job replaces the smallest processing job if 1) the new job is larger than all other pending jobs, 2) the new job is much larger than the processing one, and 3) the processed portion is small relative to the size of the new job. With an appropriate choice of parameters, we show that our algorithm improves upon the 1.5 competitive ratio for the general case and upon the 1.382 competitive ratio for the two-machine case.
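
The kill-and-restart rule described in the abstract is easy to state as a predicate. Below is a minimal sketch, where ALPHA and BETA are hypothetical threshold parameters; the paper's actual parameter choices, which yield the claimed competitive ratios, are not reproduced here.

```python
# A minimal sketch of the kill-and-restart rule described in the abstract.
# ALPHA and BETA are hypothetical placeholders; the paper tunes its own
# parameters to obtain the improved competitive ratios.

ALPHA = 2.0   # assumed: how much larger the new job must be than the running one
BETA = 0.25   # assumed: cap on the processed fraction relative to the new job

def should_restart(new_job, running_job, processed, pending):
    """Decide whether the newly arrived job should replace the smallest
    currently-processing job.

    new_job     -- processing time of the job that just arrived
    running_job -- processing time of the smallest job being processed
    processed   -- portion of running_job already executed
    pending     -- processing times of jobs still waiting in the queue
    """
    larger_than_pending = all(new_job >= p for p in pending)   # condition 1
    much_larger = new_job >= ALPHA * running_job               # condition 2
    little_progress = processed <= BETA * new_job              # condition 3
    return larger_than_pending and much_larger and little_progress
```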

Related research

We study approximation algorithms for the problem of minimizing the makespan on a set of machines with uncertainty on the processing times of jobs. In the model we consider, which goes back to \cite{BertsimasS03}, once the schedule is defined, an adversary can pick a scenario in which deviation is added to the processing times of some of the jobs. Given only the maximal cardinality of these jobs and the magnitude of the potential deviation for each job, the goal is to optimize the worst-case scenario. We consider both the case of identical machines and that of unrelated machines. Our main result is an EPTAS for the case of identical machines. We also provide a $3$-approximation algorithm and an inapproximability ratio of $2-\epsilon$ for the case of unrelated machines.
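
As a concrete illustration of this uncertainty model (the names below, and the assumption that the "maximal cardinality" bound is a single global budget gamma, are illustrative, not the paper's notation): for a fixed assignment of jobs to machines, the adversary maximizes the makespan by spending its entire deviation budget on a single machine, so the worst case can be evaluated machine by machine.

```python
def worst_case_makespan(assignment, p, dev, gamma):
    """Worst-case makespan of a fixed assignment under budgeted deviations.

    assignment[i] -- list of job indices placed on machine i
    p[j], dev[j]  -- nominal processing time and maximum deviation of job j
    gamma         -- maximum number of jobs the adversary may perturb
    """
    worst = 0.0
    for jobs in assignment:
        base = sum(p[j] for j in jobs)
        # The adversary concentrates its budget on one machine, so only the
        # gamma largest deviations on this machine matter.
        extra = sum(sorted((dev[j] for j in jobs), reverse=True)[:gamma])
        worst = max(worst, base + extra)
    return worst

# Example: two machines, three jobs, adversary may perturb one job.
print(worst_case_makespan([[0, 1], [2]], p=[3, 2, 4], dev=[1, 5, 2], gamma=1))  # -> 10
```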
In the stochastic online vector balancing problem, vectors $v_1, v_2, \ldots, v_T$ chosen independently from an arbitrary distribution in $\mathbb{R}^n$ arrive one-by-one and must be immediately given a $\pm$ sign. The goal is to keep the norm of the discrepancy vector, i.e., the signed prefix-sum, as small as possible for a given target norm. We consider some of the most well-known problems in discrepancy theory in the above online stochastic setting, and give algorithms that match the known offline bounds up to $\mathsf{polylog}(nT)$ factors. This substantially generalizes and improves upon the previous results of Bansal, Jiang, Singla, and Sinha (STOC 20). In particular, for the Komlós problem where $\|v_t\|_2 \leq 1$ for each $t$, our algorithm achieves $\tilde{O}(1)$ discrepancy with high probability, improving upon the previous $\tilde{O}(n^{3/2})$ bound. For Tusnády's problem of minimizing the discrepancy of axis-aligned boxes, we obtain an $O(\log^{d+4} T)$ bound for an arbitrary distribution over points. Previous techniques only worked for product distributions and gave a weaker $O(\log^{2d+1} T)$ bound. We also consider the Banaszczyk setting, where, given a symmetric convex body $K$ with Gaussian measure at least $1/2$, our algorithm achieves $\tilde{O}(1)$ discrepancy with respect to the norm given by $K$ for input distributions with sub-exponential tails. Our key idea is to introduce a potential that also enforces constraints on how the discrepancy vector evolves, allowing us to maintain certain anti-concentration properties. For the Banaszczyk setting, we further enhance this potential by combining it with ideas from generic chaining. Finally, we also extend these results to the setting of online multi-color discrepancy.
We study the online discrepancy minimization problem for vectors in $\mathbb{R}^d$ in the oblivious setting, where an adversary is allowed to fix the vectors $x_1, x_2, \ldots, x_n$ in arbitrary order ahead of time. We give an algorithm that maintains $O(\sqrt{\log(nd/\delta)})$ discrepancy with probability $1-\delta$, matching the lower bound given in [Bansal et al. 2020] up to an $O(\sqrt{\log\log n})$ factor in the high-probability regime. We also provide results for the weighted and multi-color settings.
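
The two abstracts above share the online signing setting: vectors arrive one at a time and each must immediately receive a $\pm 1$ sign. The sketch below is only a simple greedy baseline for that setting, choosing the sign that keeps the running discrepancy vector short; it is not the potential-function algorithms the papers analyze.

```python
import random

def greedy_signs(vectors):
    """Greedily sign each incoming vector to keep the running discrepancy
    vector short (a baseline for the online setting, not the papers' methods)."""
    d = [0.0] * len(vectors[0])   # current discrepancy vector
    signs = []
    for v in vectors:
        # ||d + v||^2 - ||d - v||^2 = 4 <d, v>, so pick the sign opposite to <d, v>.
        inner = sum(di * vi for di, vi in zip(d, v))
        s = -1 if inner > 0 else 1 if inner < 0 else random.choice([-1, 1])
        d = [di + s * vi for di, vi in zip(d, v)]
        signs.append(s)
    return signs, d
```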
Consider a unit interval $[0,1]$ in which $n$ points arrive one-by-one independently and uniformly at random. On arrival of a point, the problem is to immediately and irrevocably color it in $\{+1,-1\}$ while ensuring that every interval $[a,b] \subseteq [0,1]$ is nearly balanced. We define \emph{discrepancy} as the largest imbalance of any interval during the entire process. If all the arriving points were known upfront, then we could color them alternately to achieve a discrepancy of $1$. What is the minimum possible expected discrepancy when we color the points online? We show that the discrepancy of the above problem is sub-polynomial in $n$ and that no algorithm can achieve a constant discrepancy. This is a substantial improvement over the trivial random coloring that only gets an $\widetilde{O}(\sqrt{n})$ discrepancy. We then obtain similar results for a natural generalization of this problem to $2$ dimensions, where the points arrive uniformly at random in a unit square. This generalization allows us to improve recent results of Benade et al.~\cite{BenadeKPP-EC18} for the online envy minimization problem when the arrivals are stochastic.
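
For a fixed coloring of points on the line, the interval discrepancy can be computed exactly from prefix sums over the sorted points, since every interval picks out a contiguous run of them. The sketch below (variable names are illustrative) measures this quantity for the final coloring only, whereas the abstract's definition also maximizes over time during the arrival process; it also shows the random-coloring baseline mentioned above.

```python
import random

def interval_discrepancy(points, signs):
    """Largest imbalance |sum of signs of points in [a, b]| over all intervals,
    for one fixed coloring. Equals the spread of prefix sums in sorted order."""
    order = sorted(range(len(points)), key=lambda i: points[i])
    prefix, best_hi, best_lo = 0, 0, 0
    for i in order:
        prefix += signs[i]
        best_hi = max(best_hi, prefix)
        best_lo = min(best_lo, prefix)
    return best_hi - best_lo

# Baseline from the abstract: independent random signs on n uniform points.
n = 10_000
pts = [random.random() for _ in range(n)]
cols = [random.choice([-1, 1]) for _ in range(n)]
print(interval_discrepancy(pts, cols))  # typically on the order of sqrt(n)
```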
This paper considers the online machine minimization problem, a basic real-time scheduling problem. The setting for this problem consists of $n$ jobs that arrive over time, where each job has a deadline by which it must be completed. The goal is to design an online scheduler that feasibly schedules the jobs on a nearly minimal number of machines. An algorithm is $c$-machine optimal if it feasibly schedules any collection of jobs on $cm$ machines whenever there exists a feasible schedule on $m$ machines. For over two decades the best known result was an $O(\log P)$-machine optimal algorithm, where $P$ is the ratio of the maximum to minimum job size. In a recent breakthrough, an $O(\log m)$-machine optimal algorithm was given. In this paper, we exponentially improve on this recent result by giving an $O(\log \log m)$-machine optimal algorithm.
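
To make the notion of $c$-machine optimality concrete: an algorithm is judged on whether its output is a feasible schedule, i.e., every job runs within its release-deadline window and no machine processes two jobs at once. A minimal non-preemptive feasibility check might look as follows (the function name and data layout are illustrative assumptions, not from the paper).

```python
def schedule_is_feasible(schedule, jobs, num_machines):
    """Check a non-preemptive schedule against the model's constraints.

    schedule[j] = (machine, start)            -- placement of job j
    jobs[j]     = (release, size, deadline)   -- parameters of job j
    """
    busy = [[] for _ in range(num_machines)]
    for j, (machine, start) in schedule.items():
        release, size, deadline = jobs[j]
        if start < release or start + size > deadline:
            return False                      # runs too early or misses its deadline
        busy[machine].append((start, start + size))
    # No two jobs may overlap on the same machine.
    for intervals in busy:
        intervals.sort()
        for (s1, e1), (s2, e2) in zip(intervals, intervals[1:]):
            if s2 < e1:
                return False
    return True
```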
