
The Topology of Randomized Symmetry-Breaking Distributed Computing

Added by Ran Gelles
Publication date: 2021
Research language: English





Studying distributed computing through the lens of algebraic topology has been the source of many significant breakthroughs during the last two decades, especially in the design of lower bounds or impossibility results for deterministic algorithms. This paper aims at studying randomized synchronous distributed computing through the lens of algebraic topology. We do so by studying the wide class of (input-free) symmetry-breaking tasks, e.g., leader election, in synchronous fault-free anonymous systems. We show that it is possible to redefine solvability of a task locally, i.e., for each simplex of the protocol complex individually, without requiring any global consistency. However, this approach has a drawback: it eliminates the topological aspect of the computation, since a single facet has a trivial topological structure. To overcome this issue, we introduce a projection $\pi$ of both protocol and output complexes, where every simplex $\sigma$ is mapped to a complex $\pi(\sigma)$; the latter has a rich structure that replaces the structure we lost by considering one single facet at a time. To show the significance and applicability of our topological approach, we derive necessary and sufficient conditions for solving leader election in synchronous fault-free anonymous shared-memory and message-passing models. In both models, we consider scenarios in which there might be correlations between the random values provided to the nodes. In particular, different parties might have access to the same randomness source, so their randomness is not independent but identical. Interestingly, we find that solvability of leader election relates to the number of parties that possess correlated randomness, either directly or via their greatest common divisor, depending on the specific communication model.
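To make the final claim concrete, here is a minimal sketch in Python of the flavor of solvability check the abstract alludes to. The function `leader_election_solvable` and both predicates are hypothetical illustrations, not the paper's theorems: the abstract only states that solvability depends either on the number of parties sharing a randomness source or on the greatest common divisor of the group sizes, depending on the communication model.

```python
from math import gcd
from functools import reduce

def leader_election_solvable(group_sizes, model):
    """Hypothetical illustration of a gcd/count-based solvability condition.

    group_sizes: sizes of the groups of parties holding identical randomness.
    model: 'count' -- solvability governed directly by the group sizes,
           'gcd'   -- solvability governed by their greatest common divisor.
    The exact predicates below are placeholders, not the paper's results.
    """
    if model == 'count':
        # e.g. some party holds randomness shared with nobody else,
        # so it can break symmetry on its own
        return any(size == 1 for size in group_sizes)
    if model == 'gcd':
        # e.g. the group sizes are setwise coprime, so no nontrivial
        # symmetry among the groups survives
        return reduce(gcd, group_sizes) == 1
    raise ValueError("unknown model")

# Example: groups of correlated parties with sizes 2, 3, and 4.
print(leader_election_solvable([2, 3, 4], model='gcd'))  # True: gcd = 1
print(leader_election_solvable([2, 4, 6], model='gcd'))  # False: gcd = 2
```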

Related research

A graph is weakly $2$-colored if the nodes are labeled with colors black and white such that each black node is adjacent to at least one white node and vice versa. In this work we study the distributed computational complexity of weak $2$-coloring in the standard LOCAL model of distributed computing, and how it is related to the distributed computational complexity of other graph problems. First, we show that weak $2$-coloring is a minimal distributed symmetry-breaking problem for regular even-degree trees and high-girth graphs: if there is any non-trivial locally checkable labeling problem that is solvable in $o(\log^* n)$ rounds with a distributed graph algorithm in the middle of a regular even-degree tree, then weak $2$-coloring is also solvable in $o(\log^* n)$ rounds there. Second, we prove a tight lower bound of $\Omega(\log^* n)$ for the distributed computational complexity of weak $2$-coloring in regular trees; previously only a lower bound of $\Omega(\log \log^* n)$ was known. By minimality, the same lower bound holds for any non-trivial locally checkable problem inside regular even-degree trees.
Maximum weight matching is one of the most fundamental combinatorial optimization problems with a wide range of applications in data mining and bioinformatics. Developing distributed weighted matching algorithms is challenging due to the sequential nature of efficient algorithms for this problem. In this paper, we develop a simple distributed algorithm for the problem on general graphs with approximation guarantee of $2+\varepsilon$ that (nearly) matches that of the sequential greedy algorithm. A key advantage of this algorithm is that it can be easily implemented in only two rounds of computation in modern parallel computation frameworks such as MapReduce. We also demonstrate the efficiency of our algorithm in practice on various graphs (some with half a trillion edges) by achieving objective values always close to what is achievable in the centralized setting.
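For context, here is a minimal Python sketch of the sequential greedy baseline that the abstract compares against (not the paper's distributed or MapReduce algorithm): scanning edges in non-increasing weight order and keeping every edge whose endpoints are still free is the classical factor-2 approximation for maximum weight matching.

```python
def greedy_matching(edges):
    """Sequential greedy baseline: a factor-2 approximation for maximum
    weight matching. This is the benchmark mentioned in the abstract,
    not the paper's distributed algorithm.

    edges: iterable of (u, v, weight) triples.
    Returns the chosen matching as a list of (u, v, weight).
    """
    matched = set()
    matching = []
    # Consider edges from heaviest to lightest.
    for u, v, w in sorted(edges, key=lambda e: e[2], reverse=True):
        if u not in matched and v not in matched:
            matching.append((u, v, w))
            matched.update((u, v))
    return matching

# Example: a small weighted graph given as an edge list.
edges = [("a", "b", 5.0), ("b", "c", 4.0), ("c", "d", 5.0), ("a", "d", 1.0)]
print(greedy_matching(edges))  # [('a', 'b', 5.0), ('c', 'd', 5.0)]
```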
A fundamental problem in distributed computing is the task of cooperatively executing a given set of $t$ tasks by $p$ processors where the communication medium is dynamic and subject to failures. The dynamics of the communication medium lead to groups of processors being disconnected and possibly reconnected during the entire course of the computation; furthermore, tasks can have dependencies among them. In this paper, we present a randomized algorithm whose competitive ratio depends on the dynamics of the communication medium and on the nature of the dependencies among the tasks.
Pierre Fraigniaud, Ami Paz (2020)
Modeling distributed computing in a way enabling the use of formal methods is a challenge that has been approached from different angles, among which two techniques emerged at the turn of the century: protocol complexes, and directed algebraic topology. In both cases, the considered computational model generally assumes communication via shared objects, typically a shared memory consisting of a collection of read-write registers. Our paper is concerned with network computing, where the processes are located at the nodes of a network, and communicate by exchanging messages along the edges of that network. Applying the topological approach for verification in network computing is a considerable challenge, mainly because the presence of identifiers assigned to the nodes yields protocol complexes whose size grows exponentially with the size of the underlying network. However, many of the problems studied in this context are of local nature, and their definitions do not depend on the identifiers or on the size of the network. We leverage this independence in order to meet the above challenge, and present $\textit{local}$ protocol complexes, whose sizes do not depend on the size of the network. As an application of the design of compact protocol complexes, we reformulate the celebrated lower bound of $\Omega(\log^* n)$ rounds for 3-coloring the $n$-node ring, in the algebraic topology framework.
This paper shows for the first time that distributed computing can be both reliable and efficient in an environment that is both highly dynamic and hostile. More specifically, we show how to maintain clusters of size $O(\log N)$, each containing more than two thirds of honest nodes with high probability, within a system whose size can vary \textit{polynomially} with respect to its initial size. Furthermore, the communication cost induced by each node arrival or departure is polylogarithmic with respect to $N$, the maximal size of the system. Our clustering can be achieved despite the presence of a Byzantine adversary controlling a fraction of at most $\frac{1}{3}-\epsilon$ of the nodes, for some fixed constant $\epsilon > 0$, independent of $N$. So far, such a clustering could only be performed for systems whose size can vary by at most a constant factor, and it was not clear whether it was at all possible for polynomial variances.
