This paper focuses on the contention resolution problem on a shared communication channel that does not support collision detection. A shared communication channel is a multiple access channel, which consists of a sequence of synchronized time slots. Players on the channel may attempt to broadcast a packet (message) in any time slot. A player's broadcast succeeds if no other player broadcasts during that slot. If two or more players broadcast in the same time slot, then the broadcasts collide and all of them fail. The lack of collision detection means that a player monitoring the channel cannot differentiate between the case of two or more players broadcasting in the same slot (a collision) and the case of zero players broadcasting. In the contention-resolution problem, players arrive on the channel over time, and each player has one packet to transmit. The goal is to coordinate the players so that each player is able to successfully transmit its packet within a reasonable amount of time. However, the players can only communicate via the shared channel by choosing either to broadcast or not. A contention-resolution protocol is measured in terms of its throughput (channel utilization). Previous work on contention resolution that achieved constant throughput assumed either that players could detect collisions or that the players' arrival pattern is generated by a memoryless (non-adversarial) process. The foundational question answered by this paper is whether collision detection is a luxury or a necessity when the objective is to achieve constant throughput. We show that even without collision detection, one can solve contention resolution, achieving constant throughput, with high probability.
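To make the channel model concrete, the following is a minimal Python simulation of a slotted multiple-access channel without collision detection. The broadcast-with-probability-$1/n$ rule is a textbook baseline that (unrealistically) assumes every player knows how many players are still contending; it only illustrates the model and its roughly $1/e$ throughput, and is not the protocol from the paper.

```python
import random

def simulate_slot(active_players, broadcast_prob, rng):
    """One slot of a multiple-access channel without collision detection.

    Each active player independently broadcasts with the given probability.
    Returns the id of the unique sender if exactly one player broadcast,
    and None otherwise; a listener cannot tell whether None was silence
    (zero senders) or a collision (two or more senders).
    """
    senders = [p for p in active_players if rng.random() < broadcast_prob]
    return senders[0] if len(senders) == 1 else None

def batch_throughput(n_players, rng=random.Random(0)):
    """Slots needed for a batch of n players to all succeed when each
    broadcasts with probability 1/(number still pending)."""
    pending = set(range(n_players))
    slots = 0
    while pending:
        slots += 1
        winner = simulate_slot(pending, 1.0 / len(pending), rng)
        if winner is not None:
            pending.discard(winner)
    return n_players / slots  # empirical throughput, roughly 1/e

print(batch_throughput(256))
```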
We give a 1.488-approximation for the classic scheduling problem of minimizing total weighted completion time on unrelated machines. This is a considerable improvement on the recent breakthrough $(1.5 - 10^{-7})$-approximation (STOC 2016, Bansal-Srinivasan-Svensson) and the follow-up $(1.5 - 1/6000)$-approximation (FOCS 2017, Li). Bansal et al. introduced a novel rounding scheme that, for the first time, yields strong negative correlations, and applied it to the scheduling problem to obtain their breakthrough, resolving the open problem of whether one can beat the long-standing $1.5$-approximation barrier based on independent rounding. Our key technical contribution is achieving significantly stronger negative correlations via iterative fair contention resolution, which is of independent interest. Previously, Bansal et al. obtained strong negative correlations via a variant of pipage-type rounding, and Li used it as a black box.
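For concreteness, the sketch below only evaluates the objective: once a job-to-machine assignment is fixed, sequencing the jobs on each machine by Smith's rule (non-increasing $w_j/p_{ij}$) is optimal, so the total weighted completion time of an assignment is well defined. The instance data are made up, and this is unrelated to the rounding scheme developed in the paper.

```python
def weighted_completion_time(assignment, p, w):
    """Total weighted completion time of a fixed job-to-machine assignment.

    assignment[j] = machine of job j; p[i][j] = processing time of job j on
    machine i (unrelated machines); w[j] = weight of job j. On each machine,
    jobs are sequenced by Smith's rule (non-increasing w[j]/p[i][j]), which is
    optimal once the assignment is fixed.
    """
    total = 0.0
    for i in set(assignment):
        jobs = [j for j, m in enumerate(assignment) if m == i]
        jobs.sort(key=lambda j: w[j] / p[i][j], reverse=True)
        t = 0.0
        for j in jobs:
            t += p[i][j]          # completion time of job j on machine i
            total += w[j] * t
    return total

# Toy example: 3 jobs, 2 machines.
p = [[2.0, 1.0, 4.0],   # processing times on machine 0
     [3.0, 2.0, 1.0]]   # processing times on machine 1
w = [1.0, 2.0, 1.0]
print(weighted_completion_time([0, 0, 1], p, w))
```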
Matching is one of the most fundamental and broadly applicable problems across many domains. In these diverse real-world applications, there is often a degree of uncertainty in the input, which has led to the study of stochastic matching models. Here, each edge in the graph has a known, independent probability of existing, derived from some prediction. Algorithms must probe edges to determine existence and match them irrevocably if they exist. Further, each vertex may have a patience constraint denoting how many of its neighboring edges can be probed. We present new ordered contention resolution schemes yielding improved approximation guarantees for some of the foundational problems studied in this area. For stochastic matching with patience constraints in general graphs, we provide a 0.382-approximate algorithm, significantly improving over the previous best 0.31-approximation of Baveja et al. (2018). When the vertices do not have patience constraints, we describe a 0.432-approximate random-order probing algorithm with several corollaries, such as an improved guarantee for the Prophet Secretary problem under edge arrivals. Finally, for the special case of bipartite graphs with unit patience constraints on one of the partitions, we give a 0.632-approximate algorithm that improves on the recent $1/3$-guarantee of Hikima et al. (2021).
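As a point of reference for the probing model, here is a minimal greedy probing baseline in Python: edges are probed in a fixed order, a probe is allowed only if both endpoints are unmatched and have patience remaining, and a probed edge that turns out to exist is matched irrevocably. The instance data are made up, and the routine only illustrates the model, not the ordered contention resolution schemes of the paper.

```python
import random

def greedy_probe_matching(n, edges, patience, rng=random.Random(0)):
    """Greedy probing baseline for stochastic matching with patience constraints.

    edges: list of (u, v, prob). Edges are probed in the given order; a probe is
    allowed only if both endpoints are unmatched and still have patience left.
    A probed edge exists with probability prob and is then matched irrevocably.
    """
    matched = [False] * n
    probes_left = list(patience)
    matching = []
    for u, v, prob in edges:
        if matched[u] or matched[v] or probes_left[u] == 0 or probes_left[v] == 0:
            continue
        probes_left[u] -= 1
        probes_left[v] -= 1
        if rng.random() < prob:          # the probed edge turns out to exist
            matched[u] = matched[v] = True
            matching.append((u, v))
    return matching

print(greedy_probe_matching(4, [(0, 1, 0.9), (1, 2, 0.5), (2, 3, 0.9)], [1, 2, 2, 1]))
```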
In this paper, we consider contention resolution algorithms that are augmented with predictions about the network. We begin by studying the natural setup in which the algorithm is provided a distribution over the possible network sizes that predicts the likelihood of each size occurring. The goal is to leverage the predictive power of this distribution to improve on worst-case time complexity bounds. Using a novel connection between contention resolution and information theory, we prove lower bounds on the expected time complexity with respect to the Shannon entropy of the corresponding network size random variable, both with and without collision detection. We then analyze upper bounds for these settings, assuming now that the distribution provided as input might differ from the actual distribution generating network sizes. We express the performance of these algorithms with respect to both the entropy and the statistical divergence between the two distributions, allowing us to quantify the cost of poor predictions. Finally, we turn our attention to the related perfect advice setting, parameterized with a length $b \geq 0$, in which all active processes in a given execution are provided the best possible $b$ bits of information about their network. We provide tight bounds on the speed-up possible with respect to $b$ for deterministic and randomized algorithms, with and without collision detection. These bounds provide a fundamental limit on the maximum power that can be provided by any predictive model with a bounded output size.
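The bounds above are stated in terms of the Shannon entropy of the network-size random variable and the statistical divergence between the predicted and true distributions. The short sketch below, with hypothetical distributions, computes these two quantities (using KL divergence as the divergence measure, which is an assumption on our part) to make the parameters concrete.

```python
import math

def entropy(dist):
    """Shannon entropy (in bits) of a distribution over network sizes."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def kl_divergence(true_dist, predicted_dist):
    """KL divergence D(true || predicted), one way to quantify a poor prediction."""
    return sum(p * math.log2(p / predicted_dist[k])
               for k, p in true_dist.items() if p > 0)

true_sizes = {1: 0.5, 2: 0.25, 4: 0.25}   # hypothetical true size distribution
predicted  = {1: 0.25, 2: 0.25, 4: 0.5}   # hypothetical prediction given as input
print(entropy(true_sizes), kl_divergence(true_sizes, predicted))
```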
We present a simple deterministic distributed algorithm that computes a $(\Delta+1)$-vertex coloring in $O(\log^2 \Delta \cdot \log n)$ rounds. The algorithm can be implemented with $O(\log n)$-bit messages, and it extends to the more general $(\mathrm{degree}+1)$-list coloring problem. Obtaining a polylogarithmic-time deterministic algorithm for $(\Delta+1)$-vertex coloring had remained a central open question in the area of distributed graph algorithms since the 1980s, until a recent network decomposition algorithm of Rozhoň and Ghaffari [STOC20]. The current state of the art is based on an improved variant of their decomposition, which leads to an $O(\log^5 n)$-round algorithm for $(\Delta+1)$-vertex coloring. Our coloring algorithm is completely different and considerably simpler and faster. It solves the coloring problem in a direct way, without using network decomposition, by gradually rounding a certain fractional color assignment until reaching an integral color assignment. Moreover, via the approach of Chang, Li, and Pettie [STOC18], this improved deterministic algorithm also leads to an improvement in the complexity of randomized algorithms for $(\Delta+1)$-coloring, now reaching the bound of $O(\log^3 \log n)$ rounds. As a further application, we also provide faster deterministic distributed algorithms for the following variants of the vertex coloring problem. In graphs of arboricity $a$, we show that a $(2+\epsilon)a$-vertex coloring can be computed in $O(\log^3 a \cdot \log n)$ rounds. We also show that for $\Delta \geq 3$, a $\Delta$-coloring of a $\Delta$-colorable graph $G$ can be computed in $O(\log^2 \Delta \cdot \log^2 n)$ rounds.
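As background, the reason $\Delta+1$ colors always suffice is the simple sequential greedy argument sketched below: each vertex has at most $\Delta$ neighbors, so some color in $\{0,\dots,\Delta\}$ is always free when it is processed. This sequential sketch is only for intuition; the paper's algorithm is distributed and instead works by rounding a fractional color assignment.

```python
def greedy_delta_plus_one_coloring(adj):
    """Sequential greedy (Delta+1)-coloring.

    Each vertex picks the smallest color unused by its already-colored
    neighbors; since a vertex has at most Delta neighbors, a color in
    {0, ..., Delta} is always available.
    """
    color = {}
    for v in adj:
        taken = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in taken:
            c += 1
        color[v] = c
    return color

# 4-cycle with a chord: Delta = 3, so 4 colors always suffice (3 are used here).
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
print(greedy_delta_plus_one_coloring(adj))
```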
We present improved distributed algorithms for triangle detection and its variants in the CONGEST model. We show that Triangle Detection, Counting, and Enumeration can be solved in $\tilde{O}(n^{1/2})$ rounds. In contrast, the previous state-of-the-art bounds for Triangle Detection and Enumeration were $\tilde{O}(n^{2/3})$ and $\tilde{O}(n^{3/4})$, respectively, due to Izumi and Le Gall (PODC 2017). The main technical novelty in this work is a distributed graph partitioning algorithm. We show that in $\tilde{O}(n^{1-\delta})$ rounds we can partition the edge set of the network $G=(V,E)$ into three parts $E = E_m \cup E_s \cup E_r$ such that (a) each connected component induced by $E_m$ has minimum degree $\Omega(n^\delta)$ and conductance $\Omega(1/\mathrm{poly}\log(n))$; as a consequence, the mixing time of a random walk within the component is $O(\mathrm{poly}\log(n))$; (b) the subgraph induced by $E_s$ has arboricity at most $n^{\delta}$; (c) $|E_r| \leq |E|/6$. All of our algorithms are based on the following generic framework, which we believe is of interest beyond this work. Roughly, we deal with the set $E_s$ by an algorithm that is efficient for low-arboricity graphs, and deal with the set $E_r$ using recursive calls. For each connected component induced by $E_m$, we are able to simulate congested clique algorithms with small overhead by applying a routing algorithm due to Ghaffari, Kuhn, and Su (PODC 2017) for high-conductance graphs.
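To illustrate why low arboricity helps with triangle enumeration (the role played by $E_s$), here is a standard sequential counting routine: orient each edge from the lower-ranked to the higher-ranked endpoint under a degree ordering and intersect out-neighborhoods. With a degeneracy ordering, every out-degree would be bounded by roughly the arboricity; the degree ordering used here is a simpler stand-in, and this is not the CONGEST algorithm of the paper.

```python
def count_triangles(adj):
    """Sequential triangle counting via an edge orientation.

    Rank vertices by (degree, id), orient each edge toward the higher-ranked
    endpoint, and intersect out-neighborhoods; each triangle is counted exactly
    once, at its lowest-ranked vertex.
    """
    rank = {v: (len(adj[v]), v) for v in adj}
    out = {v: {u for u in adj[v] if rank[u] > rank[v]} for v in adj}
    triangles = 0
    for v in adj:
        for u in out[v]:
            triangles += len(out[v] & out[u])
    return triangles

adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
print(count_triangles(adj))  # two triangles: (0,1,2) and (0,2,3)
```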