
Local Conflict Coloring Revisited: Linial for Lists

Published by Yannic Maus
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Linial's famous color reduction algorithm reduces a given $m$-coloring of a graph with maximum degree $\Delta$ to an $O(\Delta^2 \log m)$-coloring, in a single round in the LOCAL model. We show a similar result when nodes are restricted to choose their color from a list of allowed colors: given an $m$-coloring in a directed graph of maximum outdegree $\beta$, if every node has a list of size $\Omega(\beta^2 (\log\beta + \log\log m + \log\log |\mathcal{C}|))$ from a color space $\mathcal{C}$, then the nodes can select a color in two rounds in the LOCAL model. Moreover, the communication of a node essentially consists of sending its list to its neighbors. This is obtained as part of a framework that also contains Linial's color reduction (with an alternative proof) as a special case. Our result also leads to a defective list coloring algorithm. As a corollary, we improve the state-of-the-art truly local $(\deg+1)$-list coloring algorithm from Barenboim et al. [PODC18] by slightly reducing the runtime to $O(\sqrt{\Delta\log\Delta}) + \log^* n$ and significantly reducing the message size (from huge to roughly $\Delta$). Our techniques are inspired by the local conflict coloring framework of Fraigniaud et al. [FOCS16].
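
To make the one-round mechanism concrete, here is a minimal Python sketch of the classical Linial-style color reduction that the abstract builds on (not the paper's list-coloring extension): each color in $[m]$ is identified with a low-degree polynomial over a prime field; two distinct polynomials agree on few points, so every node can keep a point of its own polynomial that no neighbor's polynomial passes through. The function names and the toy graph are illustrative only.

```python
# Sketch of one round of Linial-style color reduction (classical construction,
# not the paper's list variant). Assumes a proper m-coloring is given.
from itertools import count

def next_prime(n):
    """Smallest prime >= n (naive trial division, fine for toy sizes)."""
    def is_prime(k):
        return k >= 2 and all(k % d for d in range(2, int(k ** 0.5) + 1))
    return next(k for k in count(max(n, 2)) if is_prime(k))

def poly_of_color(color, degree, q):
    """Interpret `color` in base q as coefficients of a polynomial of degree <= `degree`."""
    coeffs = []
    for _ in range(degree + 1):
        coeffs.append(color % q)
        color //= q
    return coeffs

def eval_poly(coeffs, x, q):
    return sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q

def linial_round(adjacency, coloring, m, max_degree):
    """One synchronous LOCAL round reducing an m-coloring to a q*q-coloring."""
    # choose polynomial degree d and prime q with q^(d+1) >= m and q > max_degree * d
    d = 1
    while True:
        q = next_prime(max_degree * d + 1)
        if q ** (d + 1) >= m:
            break
        d += 1
    new_coloring = {}
    for v, neighbors in adjacency.items():
        own = poly_of_color(coloring[v], d, q)
        # the points of all neighbors' polynomials: everything v must avoid
        blocked = {(x, eval_poly(poly_of_color(coloring[u], d, q), x, q))
                   for u in neighbors for x in range(q)}
        # distinct degree-<=d polynomials agree on <= d points, so at most
        # max_degree * d < q of v's q points are blocked; one always survives
        x = next(x for x in range(q) if (x, eval_poly(own, x, q)) not in blocked)
        new_coloring[v] = x * q + eval_poly(own, x, q)   # new color in [q^2]
    return new_coloring, q * q

# toy run: a properly 5-colored 5-cycle, maximum degree 2
adj = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
cols = {i: i for i in range(5)}
new_cols, palette_size = linial_round(adj, cols, m=5, max_degree=2)
print(palette_size, new_cols)
```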




Read also

We present a randomized distributed algorithm that computes a $\Delta$-coloring in any non-complete graph with maximum degree $\Delta \geq 4$ in $O(\log\Delta) + 2^{O(\sqrt{\log\log n})}$ rounds, as well as a randomized algorithm that computes a $\Delta$-coloring in $O((\log\log n)^2)$ rounds when $\Delta \in [3, O(1)]$. Both these algorithms improve on an $O(\log^3 n/\log\Delta)$-round algorithm of Panconesi and Srinivasan [STOC1993], which has remained the state of the art for the past 25 years. Moreover, the latter algorithm gets (exponentially) closer to an $\Omega(\log\log n)$-round lower bound of Brandt et al. [STOC16].
Yannic Maus, 2021
In this paper we present a deterministic CONGEST algorithm to compute an $O(k\Delta)$-vertex coloring in $O(\Delta/k) + \log^* n$ rounds, where $\Delta$ is the maximum degree of the network graph and $1 \leq k \leq O(\Delta)$ can be freely chosen. The algorithm is extremely simple: each node locally computes a sequence of colors and then it tries colors from the sequence in batches of size $k$. Our algorithm subsumes many important results in the history of distributed graph coloring as special cases, including Linial's color reduction [Linial, FOCS87], the celebrated locally iterative algorithm from [Barenboim, Elkin, Goldenberg, PODC18], and various algorithms to compute defective and arbdefective colorings. Our algorithm can smoothly scale between these and also simplifies the state-of-the-art $(\Delta+1)$-coloring algorithm. At the cost of losing the full algorithm's simplicity, we also provide an $O(k\Delta)$-coloring algorithm in $O(\sqrt{\Delta/k}) + \log^* n$ rounds. We also provide improved deterministic algorithms for ruling sets and, additionally, a tight characterization of one-round color reduction algorithms.
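
As an illustration of the "try colors in batches of size $k$" idea described above, here is a toy synchronous simulation in Python. It uses the naive sequence 0, 1, 2, ... and breaks ties by node IDs, so it does not reproduce the paper's sequence construction or its $O(k\Delta)$-palette and $O(\Delta/k) + \log^* n$-round bounds; it only shows the batch-trial mechanics. All names are made up for this sketch.

```python
def batch_trial_coloring(adjacency, k):
    """Toy batch-trial coloring: per round, each uncolored node proposes its next
    k candidate colors; conflicts inside a round are broken by node IDs."""
    color = {v: None for v in adjacency}      # permanently fixed colors
    cursor = {v: 0 for v in adjacency}        # next position in v's color sequence
    rounds = 0
    while any(c is None for c in color.values()):
        rounds += 1
        # every still-uncolored node announces its next batch of k colors
        proposals = {v: set(range(cursor[v], cursor[v] + k))
                     for v in adjacency if color[v] is None}
        decided = {}
        for v, candidates in proposals.items():
            blocked = {color[u] for u in adjacency[v] if color[u] is not None}
            # also avoid everything proposed by higher-ID uncolored neighbors
            blocked |= {c for u in adjacency[v] if u in proposals and u > v
                        for c in proposals[u]}
            free = sorted(candidates - blocked)
            if free:
                decided[v] = free[0]          # keep the first surviving candidate
            cursor[v] += k                    # advance the sequence either way
        color.update(decided)                 # apply all decisions simultaneously
    return color, rounds

# toy run on a 5-cycle with k = 2
adj = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
cols, r = batch_trial_coloring(adj, k=2)
print(cols, "rounds:", r)
```
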
Distributed vertex coloring is one of the classic and probably most widely studied problems in the area of distributed graph algorithms. We present a new randomized distributed vertex coloring algorithm for the standard CONGEST model, where the network is modeled as an $n$-node graph $G$, and where the nodes of $G$ operate in synchronous communication rounds in which they can exchange $O(\log n)$-bit messages over all the edges of $G$. For graphs with maximum degree $\Delta$, we show that the $(\Delta+1)$-list coloring problem (and therefore also the standard $(\Delta+1)$-coloring problem) can be solved in $O(\log^5\log n)$ rounds. Previously, such a result was only known for the significantly more powerful LOCAL model, where in each round neighboring nodes can exchange messages of arbitrary size. The best previous $(\Delta+1)$-coloring algorithm in the CONGEST model had a running time of $O(\log\Delta + \log^6\log n)$ rounds. As a function of $n$ alone, the best previous algorithm therefore had a round complexity of $O(\log n)$, a bound that can also be achieved by a naïve folklore algorithm. For large maximum degree $\Delta$, our algorithm is hence an exponential improvement over the previous state of the art.
This paper considers the modeling and the analysis of the performance of lock-free concurrent data structures. Lock-free designs employ an optimistic conflict control mechanism, allowing several processes to access the shared data object at the same time. They guarantee that at least one concurrent operation finishes in a finite number of its own steps regardless of the state of the operations. Our analysis considers such lock-free data structures that can be represented as linear combinations of fixed-size retry loops. Our main contribution is a new way of modeling and analyzing a general class of lock-free algorithms, achieving predictions of throughput that are close to what we observe in practice. We emphasize two kinds of conflicts that shape the performance: (i) hardware conflicts, due to concurrent calls to atomic primitives; (ii) logical conflicts, caused by simultaneous operations on the shared data structure. We show how to deal with these hardware and logical conflicts separately, and how to combine them, so as to calculate the throughput of lock-free algorithms. We also propose a common framework that enables a fair comparison between lock-free implementations by covering the whole contention domain, together with a better understanding of the performance-impacting factors. This part of our analysis comes with a method for calculating a good back-off strategy to finely tune the performance of a lock-free algorithm. Our experimental results, based on a set of widely used concurrent data structures and on abstract lock-free designs, show that our analysis follows closely the actual code behavior.
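
For readers unfamiliar with the structure being analyzed, the following is a minimal Python sketch of a fixed-size retry loop with exponential back-off, the building block the abstract refers to. CPython exposes no user-level CAS, so the AtomicCell class below fakes one with an internal lock purely to keep the example runnable; the back-off constants are arbitrary, and nothing here reproduces the paper's throughput model.

```python
# Sketch of a lock-free-style retry loop: read, compute, one CAS attempt,
# and on failure back off and retry. AtomicCell simulates a hardware CAS.
import random
import threading
import time

class AtomicCell:
    """Simulated atomic cell: compare_and_set stands in for a hardware CAS."""
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()
    def load(self):
        return self._value
    def compare_and_set(self, expected, new):
        with self._lock:                      # stands in for the atomic instruction
            if self._value == expected:
                self._value = new
                return True
            return False

def increment(cell, max_backoff=1e-3):
    backoff = 1e-6
    while True:                               # the fixed-size retry loop
        current = cell.load()                 # (i) read shared state
        if cell.compare_and_set(current, current + 1):   # (ii) one CAS attempt
            return
        # (iii) conflict: another thread won; back off, then retry
        time.sleep(random.uniform(0, backoff))
        backoff = min(backoff * 2, max_backoff)

counter = AtomicCell(0)
threads = [threading.Thread(target=lambda: [increment(counter) for _ in range(1000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter.load())                         # 4000
```
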
We present a simple deterministic distributed algorithm that computes a $(\Delta+1)$-vertex coloring in $O(\log^2\Delta \cdot \log n)$ rounds. The algorithm can be implemented with $O(\log n)$-bit messages, and it can also be extended to the more general $(\mathrm{degree}+1)$-list coloring problem. Obtaining a polylogarithmic-time deterministic algorithm for $(\Delta+1)$-vertex coloring had remained a central open question in the area of distributed graph algorithms since the 1980s, until a recent network decomposition algorithm of Rozhoň and Ghaffari [STOC20]. The current state of the art is based on an improved variant of their decomposition, which leads to an $O(\log^5 n)$-round algorithm for $(\Delta+1)$-vertex coloring. Our coloring algorithm is completely different and considerably simpler and faster. It solves the coloring problem in a direct way, without using network decomposition, by gradually rounding a certain fractional color assignment until reaching an integral color assignment. Moreover, via the approach of Chang, Li, and Pettie [STOC18], this improved deterministic algorithm also leads to an improvement in the complexity of randomized algorithms for $(\Delta+1)$-coloring, now reaching the bound of $O(\log^3\log n)$ rounds. As a further application, we also provide faster deterministic distributed algorithms for the following variants of the vertex coloring problem: in graphs of arboricity $a$, we show that a $(2+\epsilon)a$-vertex coloring can be computed in $O(\log^3 a \cdot \log n)$ rounds; and for $\Delta \geq 3$, a $\Delta$-coloring of a $\Delta$-colorable graph $G$ can be computed in $O(\log^2\Delta \cdot \log^2 n)$ rounds.