
Learnable and Instance-Robust Predictions for Online Matching, Flows and Load Balancing

 Added by Chenyang Xu
 Publication date 2020
Research language: English





We propose a new model for augmenting algorithms with predictions that requires the predictions to be formally learnable and instance robust. Learnability ensures that predictions can be constructed efficiently from a reasonable amount of past data. Instance robustness ensures that the prediction tolerates modest changes in the problem input, where the measure of change may be problem specific, and that performance degrades smoothly as a function of that change; ideally, the performance is never worse than the worst-case bounds. This also allows predictions to be compared objectively. We design online algorithms with predictions for a network flow allocation problem and for restricted assignment makespan minimization. For both problems, we establish two key properties: high-quality predictions can be learned from a small sample of prior instances, and the resulting guarantees degrade smoothly as the underlying problem instance changes.
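To make the model concrete, here is a minimal Python sketch of the general prediction-augmented pattern for online restricted-assignment load balancing: learn per-machine target loads from past instances, follow them while they remain feasible, and fall back to a greedy least-loaded rule otherwise. The prediction format, the learning rule, and the fallback are illustrative assumptions; this is not the paper's algorithm.

```python
# Minimal sketch of the general "algorithm with predictions" pattern for online
# restricted-assignment load balancing. This is NOT the paper's algorithm: the
# prediction format (per-machine target loads), the learning rule, and the
# fallback are illustrative assumptions.
from typing import Dict, List, Tuple

def learn_targets(past_instances: List[Dict[str, float]]) -> Dict[str, float]:
    """Crude learned prediction: average historical load per machine."""
    targets: Dict[str, float] = {}
    for inst in past_instances:
        for machine, load in inst.items():
            targets[machine] = targets.get(machine, 0.0) + load / len(past_instances)
    return targets

def assign_online(jobs: List[Tuple[float, List[str]]],
                  targets: Dict[str, float],
                  machines: List[str]) -> Dict[str, float]:
    """jobs: (size, allowed machines). Follow the prediction while a machine is
    below its predicted target; otherwise fall back to greedy least-loaded,
    which keeps worst-case behavior bounded."""
    load = {m: 0.0 for m in machines}
    for size, allowed in jobs:
        under = [m for m in allowed if load[m] + size <= targets.get(m, 0.0)]
        pool = under if under else allowed         # greedy fallback
        chosen = min(pool, key=lambda m: load[m])  # least-loaded machine in the pool
        load[chosen] += size
    return load
```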

Related research

In the load balancing problem, introduced by Graham in the 1960s (SIAM J. of Appl. Math. 1966, 1969), jobs arriving online have to be assigned to machines so as to minimize an objective defined on the machine loads. A long line of work has addressed this problem for both the makespan norm and arbitrary $\ell_q$-norms of machine loads. Recent literature (e.g., Azar et al., STOC 2013; Im et al., FOCS 2015) has further expanded the scope of this problem to vector loads, to capture jobs with multi-dimensional resource requirements in applications such as data centers. In this paper, we completely resolve the job scheduling problem for both scalar and vector jobs on related machines, i.e., where each machine has a given speed and the time taken to process a job is inversely proportional to the speed of the machine it is assigned to. We show the following results. For scalar scheduling, we give a constant-competitive algorithm for optimizing any $\ell_q$-norm on related machines; the only previously known result was for the makespan norm. For vector scheduling, there are two natural variants, depending on whether the speed of a machine is dimension-dependent or not. We show a sharp contrast between these two variants, proving that they are respectively equivalent to unrelated machines and to identical machines for the makespan norm. We also extend these results to arbitrary $\ell_q$-norms of the machine loads. No previous results were known for vector scheduling on related machines.
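For intuition about the scalar setting, the following Python sketch implements the natural greedy baseline on related machines: each arriving job goes to the machine that minimizes the resulting $\ell_q$-norm of the load vector, where a machine of speed $s$ processes a job of size $p$ in time $p/s$. This baseline is only illustrative and is not the constant-competitive algorithm of the paper.

```python
# Illustrative greedy baseline for online scalar scheduling on related machines:
# each arriving job is placed on the machine whose resulting increase in the
# l_q norm of the load vector is smallest. A machine of speed s processes a job
# of size p in time p / s. Not the paper's constant-competitive algorithm.
from typing import List

def greedy_lq_assign(jobs: List[float], speeds: List[float], q: float = 2.0) -> List[float]:
    loads = [0.0] * len(speeds)                    # current load (processing time) per machine
    def lq_norm(ls: List[float]) -> float:
        return sum(l ** q for l in ls) ** (1.0 / q)
    for size in jobs:
        best_i, best_norm = 0, float("inf")
        for i, s in enumerate(speeds):
            trial = loads.copy()
            trial[i] += size / s                   # processing time is inversely proportional to speed
            if lq_norm(trial) < best_norm:
                best_i, best_norm = i, lq_norm(trial)
        loads[best_i] += size / speeds[best_i]
    return loads
```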
In this paper we consider neighborhood load balancing in the context of selfish clients. We assume that a network of n processors and m tasks is given. The processors may have different speeds and the tasks may have different weights. Every task is controlled by a selfish user, whose objective is to allocate his/her task to a processor with minimum load. We revisit the concurrent probabilistic protocol introduced in [6], which works in sequential rounds. In each round every task is allowed to query the load of one randomly chosen neighboring processor. If that load is smaller, the task migrates to that processor with a suitably chosen probability. Using techniques from spectral graph theory we obtain upper bounds on the expected convergence time towards approximate and exact Nash equilibria that are significantly better than the previous results in [6]. We show results for uniform tasks on non-uniform processors and for the general case where the tasks have different weights and the machines have different speeds. To the best of our knowledge, these are the first results for this general setting.
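The Python sketch below shows one round of a protocol in the spirit of the concurrent protocol described above, for the simplest case of uniform tasks and uniform speeds: loads are measured at the start of the round, each task queries one random neighboring processor, and all migrations are applied concurrently. The migration probability used here (half the relative load gap) is an assumption for illustration; the exact probabilities analyzed in [6] are not reproduced.

```python
import random

# One round of a concurrent neighborhood load balancing protocol (uniform tasks,
# uniform speeds). Loads are measured once at the start of the round and all
# migrations are applied together at the end, mirroring the concurrent setting.
def one_round(assignment: dict, neighbors: dict) -> dict:
    """assignment: task -> processor; neighbors: processor -> list of neighboring processors."""
    load = {}
    for task, proc in assignment.items():
        load[proc] = load.get(proc, 0) + 1
    moves = {}
    for task, proc in assignment.items():
        target = random.choice(neighbors[proc])    # query one random neighboring processor
        gap = load[proc] - load.get(target, 0)
        if gap > 0 and random.random() < gap / (2.0 * load[proc]):
            moves[task] = target                   # migrate with the chosen probability
    assignment.update(moves)                       # apply all migrations concurrently
    return assignment
```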
Bo Li, Minming Li, Xiaowei Wu (2019)
In the online load balancing problem on related machines, we have a set of jobs (with different sizes) arriving online, and we need to assign each job to a machine immediately upon its arrival, so as to minimize the makespan, i.e., the maximum completion time. In classic mechanism design problems, we assume that the jobs are controlled by selfish agents, with the sizes being their private information. Each job (agent) aims at minimizing its own cost, which is its completion time plus the payment charged by the mechanism. Truthful mechanisms guaranteeing that every job minimizes its cost by reporting its true size have been well-studied [Aspnes et al. JACM 1997, Feldman et al. EC 2017]. In this paper, we study truthful online load balancing mechanisms that are well-behaved [Epstein et al., MOR 2016]. Well-behavedness is important as it guarantees fairness between machines, and it implies truthfulness in some cases when machines are controlled by selfish agents. Unfortunately, existing truthful online load balancing mechanisms are not well-behaved. We first show that to guarantee producing a well-behaved schedule, any online algorithm (even a non-truthful one) has a competitive ratio of at least $\Omega(\sqrt{m})$, where $m$ is the number of machines. Then we propose a mechanism that guarantees truthfulness of the online jobs and produces a schedule that is almost well-behaved. We show that our algorithm has a competitive ratio of $O(\log m)$. Moreover, when the sizes of online jobs are bounded, the competitive ratio of our algorithm improves to $O(1)$. Interestingly, we show several cases in which our mechanism is actually truthful against selfish machines.
In bipartite matching problems, vertices on one side of a bipartite graph are paired with those on the other. In its online variant, one side of the graph is available offline, while the vertices on the other side arrive online. When a vertex arrives, the algorithm must make an irrevocable and immediate decision: either match it to an available vertex or drop it. Examples of such problems include matching workers to firms, advertisers to keywords, organs to patients, and so on. Much of the literature focuses on maximizing the total relevance of the matching, modeled via total weight. However, in many real-world problems it is also important to account for diversity: hiring a diverse pool of candidates, displaying a relevant but diverse set of ads, and so on. In this paper, we propose the Online Submodular Bipartite Matching (OSBM) problem, where the goal is to maximize a submodular function $f$ over the set of matched edges. This objective is general enough to capture both diversity (e.g., a weighted coverage function) and relevance (e.g., the traditional linear function), as well as many other natural objective functions occurring in practice (e.g., a limited total budget in advertising settings). We propose novel algorithms that have provable guarantees and are essentially optimal when restricted to various special cases. We also run experiments on real-world and synthetic datasets to validate our algorithms.
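For intuition about the OSBM objective, here is a small Python sketch of a marginal-gain greedy: when an online vertex arrives, it is matched to the free offline neighbor whose edge yields the largest positive marginal increase of $f$. This is a natural baseline under the stated input-format assumptions, not necessarily one of the algorithms analyzed in the paper.

```python
from typing import Callable, FrozenSet, List, Tuple

# Marginal-gain greedy for a submodular matching objective: each arriving online
# vertex is matched to the free offline neighbor whose edge gives the largest
# positive marginal increase of f, or dropped if no such edge exists.
def greedy_osbm(arrivals: List[Tuple[str, List[str]]],
                f: Callable[[FrozenSet[Tuple[str, str]]], float]) -> FrozenSet[Tuple[str, str]]:
    """arrivals: (online vertex, list of offline neighbors), in arrival order."""
    matched: set = set()
    used_offline: set = set()
    for v, nbrs in arrivals:
        base = f(frozenset(matched))
        best_u, best_gain = None, 0.0
        for u in nbrs:
            if u in used_offline:
                continue
            gain = f(frozenset(matched | {(u, v)})) - base
            if gain > best_gain:                   # keep the edge with the largest marginal gain
                best_u, best_gain = u, gain
        if best_u is not None:                     # drop v if no edge has positive gain
            matched.add((best_u, v))
            used_offline.add(best_u)
    return frozenset(matched)
```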
Set function optimization is essential in AI and machine learning. We focus on a subadditive set function that generalizes submodularity, and examine the subadditivity of non-submodular functions. We also deal with a minimax subadditive load balancing problem, and present a modularization-minimization algorithm that theoretically guarantees a worst-case approximation factor. In addition, we give a lower bound computation technique for the problem. We apply these methods to the multi-robot routing problem for an empirical performance evaluation.
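As a quick illustration of the property in question, the sketch below numerically spot-checks subadditivity, $f(S \cup T) \le f(S) + f(T)$, on randomly sampled pairs of subsets. It only illustrates the definition and does not reproduce the paper's modularization-minimization algorithm or its lower-bound computation technique.

```python
import random
from typing import Callable, FrozenSet, Iterable

# Numerical spot-check of subadditivity, f(S | T) <= f(S) + f(T), on randomly
# sampled pairs of subsets. Purely illustrative of the definition.
def probably_subadditive(f: Callable[[FrozenSet], float],
                         ground_set: Iterable,
                         trials: int = 200,
                         seed: int = 0) -> bool:
    rng = random.Random(seed)
    items = list(ground_set)
    for _ in range(trials):
        S = frozenset(x for x in items if rng.random() < 0.5)
        T = frozenset(x for x in items if rng.random() < 0.5)
        if f(S | T) > f(S) + f(T) + 1e-9:          # tolerance for floating-point error
            return False
    return True
```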

