Improved Analysis of Deterministic Load-Balancing Schemes

Published by Przemyslaw Uznanski
Publication date: 2014
Research field: Informatics Engineering
Paper language: English




We consider the problem of deterministic load balancing of tokens in the discrete model. A set of $n$ processors is connected into a $d$-regular undirected network. In every time step, each processor exchanges some of its tokens with each of its neighbors in the network. The goal is to minimize the discrepancy between the number of tokens on the most-loaded and the least-loaded processor as quickly as possible. Rabani et al. (1998) present a general technique for the analysis of a wide class of discrete load balancing algorithms. Their approach is to characterize the deviation between the actual loads of a discrete balancing algorithm and the distribution generated by a related Markov chain. The Markov chain can also be regarded as the underlying model of a continuous diffusion algorithm. Rabani et al. showed that after time $T = O(\log(Kn)/\mu)$, any algorithm of their class achieves a discrepancy of $O(d\log n/\mu)$, where $\mu$ is the spectral gap of the transition matrix of the graph, and $K$ is the initial load discrepancy in the system. In this work we identify some natural additional conditions on deterministic balancing algorithms, resulting in a class of algorithms reaching a smaller discrepancy. This class contains well-known algorithms, e.g., the Rotor-Router. Specifically, we introduce the notion of cumulatively fair load-balancing algorithms, where in any interval of consecutive time steps the total number of tokens sent out over an edge by a node is the same (up to constants) for all adjacent edges. We prove that algorithms which are cumulatively fair, and in which every node retains a sufficient part of its load in each step, achieve a discrepancy of $O(\min\{d\sqrt{\log n/\mu}, d\sqrt{n}\})$ in time $O(T)$. We also show that in general neither of these assumptions may be omitted without increasing the discrepancy. We then show, by a combinatorial potential reduction argument, that any cumulatively fair scheme satisfying some additional assumptions achieves a discrepancy of $O(d)$ almost as quickly as the continuous diffusion process. This positive result applies to some of the simplest and most natural discrete load-balancing schemes.
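As a concrete illustration of the class of schemes described above, the following Python sketch runs one synchronous round of a rotor-router-style balancing step on a small $d$-regular graph: each node keeps part of its load and forwards the remaining tokens to its neighbors in round-robin order, so that over any interval the amounts sent along adjacent edges stay within one of each other (the cumulative-fairness property). The function name, the keep_fraction parameter, and the toy cycle graph are illustrative assumptions, not the exact algorithm or parameters analyzed in the paper.

import random

def rotor_router_round(load, adj, rotor, keep_fraction=0.5):
    # One synchronous round: every node keeps roughly keep_fraction of its
    # tokens (the "retains a sufficient part of its load" assumption) and
    # sends the rest one token at a time to its neighbors in round-robin
    # order, advancing a per-node rotor pointer.
    n = len(adj)
    incoming = [0] * n
    for v in range(n):
        kept = int(load[v] * keep_fraction)
        incoming[v] += kept
        for _ in range(load[v] - kept):
            u = adj[v][rotor[v]]            # neighbor currently pointed at
            incoming[u] += 1
            rotor[v] = (rotor[v] + 1) % len(adj[v])
    return incoming

# Toy run on a 2-regular graph (a cycle on 8 nodes).
n = 8
adj = [[(v - 1) % n, (v + 1) % n] for v in range(n)]
load = [random.randint(0, 40) for _ in range(n)]
rotor = [0] * n
for _ in range(30):
    load = rotor_router_round(load, adj, rotor)
print("final loads:", load, "discrepancy:", max(load) - min(load))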




Read also

Set function optimization is essential in AI and machine learning. We focus on a subadditive set function that generalizes submodularity, and examine the subadditivity of non-submodular functions. We also deal with a minimax subadditive load balancing problem, and present a modularization-minimization algorithm that theoretically guarantees a worst-case approximation factor. In addition, we give a lower bound computation technique for the problem. We apply these methods to the multi-robot routing problem for an empirical performance evaluation.
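For reference, a set function $f$ is subadditive if $f(S \cup T) \le f(S) + f(T)$ for all sets $S, T$, a weaker requirement than submodularity. The short Python sketch below brute-force checks this condition on a tiny ground set; the example function (the maximum of a set of nonnegative numbers) is a hypothetical illustration, unrelated to the paper's load-balancing objective.

from itertools import combinations

def is_subadditive(f, ground):
    # Brute-force check of f(S | T) <= f(S) + f(T) over all pairs of
    # subsets of a small ground set; f maps frozensets to numbers.
    subsets = [frozenset(c) for r in range(len(ground) + 1)
               for c in combinations(ground, r)]
    return all(f(S | T) <= f(S) + f(T) + 1e-12
               for S in subsets for T in subsets)

# Example: the maximum of a set of nonnegative numbers is subadditive.
f = lambda S: max(S) if S else 0.0
print(is_subadditive(f, {1, 2, 3}))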
In the load balancing problem, introduced by Graham in the 1960s (SIAM J. of Appl. Math. 1966, 1969), jobs arriving online have to be assigned to machines so as to minimize an objective defined on machine loads. A long line of work has addressed this problem for both the makespan norm and arbitrary $\ell_q$-norms of machine loads. Recent literature (e.g., Azar et al., STOC 2013; Im et al., FOCS 2015) has further expanded the scope of this problem to vector loads, to capture jobs with multi-dimensional resource requirements in applications such as data centers. In this paper, we completely resolve the job scheduling problem for both scalar and vector jobs on related machines, i.e., where each machine has a given speed and the time taken to process a job is inversely proportional to the speed of the machine it is assigned to. We show the following results. For scalar scheduling, we give a constant competitive algorithm for optimizing any $\ell_q$-norm for related machines. The only previously known result was for the makespan norm. For vector scheduling, there are two natural variants, depending on whether the speed of a machine is dimension-dependent or not. We show a sharp contrast between these two variants, proving that they are respectively equivalent to unrelated machines and identical machines for the makespan norm. We also extend these results to arbitrary $\ell_q$-norms of the machine loads. No previous results were known for vector scheduling on related machines.
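To make the related-machines model concrete, here is a minimal Python sketch of an online greedy rule that places each arriving job on the machine minimizing the resulting $\ell_q$-norm of the load vector, where a job of size $s$ on a machine of speed $v$ contributes $s/v$ to that machine's load. The greedy rule and the sample inputs are illustrative only; they are not the constant-competitive algorithm of the paper.

def greedy_related(jobs, speeds, q=2):
    # Online greedy for the related-machines model: each arriving job goes
    # to the machine whose choice minimizes the resulting l_q-norm of the
    # machine loads; a job of size s on a machine of speed v adds s / v.
    loads = [0.0] * len(speeds)
    for s in jobs:
        def norm_if(i):
            return sum((loads[j] + (s / speeds[j] if j == i else 0.0)) ** q
                       for j in range(len(speeds)))
        best = min(range(len(speeds)), key=norm_if)
        loads[best] += s / speeds[best]
    return loads

loads = greedy_related(jobs=[4, 7, 2, 9, 5, 1], speeds=[1.0, 2.0, 4.0], q=2)
print("machine loads:", loads, "l_2 norm:", sum(x ** 2 for x in loads) ** 0.5)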
Bo Li, Minming Li, Xiaowei Wu (2019)
In the online load balancing problem on related machines, we have a set of jobs (with different sizes) arriving online, and we need to assign each job to a machine immediately upon its arrival, so as to minimize the makespan, i.e., the maximum completion time. In classic mechanism design problems, we assume that the jobs are controlled by selfish agents, with the sizes being their private information. Each job (agent) aims at minimizing its own cost, which is its completion time plus the payment charged by the mechanism. Truthful mechanisms guaranteeing that every job minimizes its cost by reporting its true size have been well-studied [Aspnes et al., JACM 1997; Feldman et al., EC 2017]. In this paper, we study truthful online load balancing mechanisms that are well-behaved [Epstein et al., MOR 2016]. Well-behavior is important as it guarantees fairness between machines, and implies truthfulness in some cases when machines are controlled by selfish agents. Unfortunately, existing truthful online load balancing mechanisms are not well-behaved. We first show that to guarantee producing a well-behaved schedule, any online algorithm (even non-truthful) has a competitive ratio of at least $\Omega(\sqrt{m})$, where $m$ is the number of machines. Then we propose a mechanism that guarantees truthfulness of the online jobs, and produces a schedule that is almost well-behaved. We show that our algorithm has a competitive ratio of $O(\log m)$. Moreover, for the case when the sizes of online jobs are bounded, the competitive ratio of our algorithm improves to $O(1)$. Interestingly, we show several cases for which our mechanism is actually truthful against selfish machines.
Network decomposition is a central tool in distributed graph algorithms. We present two improvements on the state of the art for network decomposition, which thus lead to improvements in the (deterministic and randomized) complexity of several well-studied graph problems.
- We provide a deterministic distributed network decomposition algorithm with $O(\log^5 n)$ round complexity, using $O(\log n)$-bit messages. This improves on the $O(\log^7 n)$-round algorithm of Rozhoň and Ghaffari [STOC20], which used large messages, and their $O(\log^8 n)$-round algorithm with $O(\log n)$-bit messages. This directly leads to similar improvements for a wide range of deterministic and randomized distributed algorithms whose solution relies on network decomposition, including the general distributed derandomization of Ghaffari, Kuhn, and Harris [FOCS18].
- One drawback of the algorithm of Rozhoň and Ghaffari, in the $\mathsf{CONGEST}$ model, was its dependence on the length of the identifiers. Because of this, for instance, the algorithm could not be used in the shattering framework in the $\mathsf{CONGEST}$ model. Thus, the state-of-the-art randomized complexity of several problems in this model remained with an additive $2^{O(\sqrt{\log\log n})}$ term, which was a clear leftover of the older network decomposition complexity [Panconesi and Srinivasan, STOC92]. We present a modified version that remedies this, constructing a decomposition whose quality does not depend on the identifiers, and thus improves the randomized round complexity for various problems.
We give the first polynomial-time approximation scheme (PTAS) for the stochastic load balancing problem when the job sizes follow Poisson distributions. This improves upon the 2-approximation algorithm due to Goel and Indyk (FOCS99). Moreover, our approximation scheme is an efficient PTAS that has a running time double exponential in $1/\epsilon$ but nearly-linear in $n$, where $n$ is the number of jobs and $\epsilon$ is the target error. Previously, a PTAS (not efficient) was only known for jobs that obey exponential distributions (Goel and Indyk, FOCS99). Our algorithm relies on several probabilistic ingredients, including some (seemingly) new results on scaling and the so-called focusing effect of the maximum of Poisson random variables, which might be of independent interest.
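The quantity being approximated is the expected makespan, $\mathbb{E}[\max_i L_i]$, of an assignment when job sizes are random. The Python sketch below estimates it for a fixed assignment by Monte Carlo, sampling Poisson job sizes with Knuth's multiplication method; the rates and the assignment are made-up values, and the simulation only illustrates the objective, not the paper's PTAS, which works with the distributions analytically rather than by sampling.

import math
import random

def poisson_sample(lam):
    # Draw a Poisson(lam) variate with Knuth's multiplication method.
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def expected_makespan_mc(assignment, rates, n_machines, trials=20000):
    # Monte Carlo estimate of E[max machine load] when job j has a
    # Poisson(rates[j]) size and is placed on machine assignment[j].
    total = 0.0
    for _ in range(trials):
        loads = [0] * n_machines
        for job, lam in enumerate(rates):
            loads[assignment[job]] += poisson_sample(lam)
        total += max(loads)
    return total / trials

rates = [3.0, 1.5, 2.0, 4.0, 0.5]   # hypothetical Poisson rates of five jobs
assignment = [0, 1, 0, 1, 0]        # a fixed assignment to two machines
print("estimated expected makespan:", expected_makespan_mc(assignment, rates, 2))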