
Fast Self-Stabilizing Minimum Spanning Tree Construction

 Added by Stephane Rovedakis
 Publication date: 2010
 Language: English





We present a novel self-stabilizing algorithm for minimum spanning tree (MST) construction. The space complexity of our solution is $O(\log^2 n)$ bits and it converges in $O(n^2)$ rounds. Thus, this algorithm improves the convergence time of all previously known self-stabilizing asynchronous MST algorithms by a multiplicative factor $\Theta(n)$, at the price of increasing the best known space complexity by a factor of $O(\log n)$. The main ingredient used in our algorithm is the design, for the first time in a self-stabilizing setting, of a labeling scheme for computing the nearest common ancestor with only $O(\log^2 n)$ bits.
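To make the labeling-scheme ingredient concrete, the following is a minimal, centralized (and therefore non-self-stabilizing) sketch of a nearest-common-ancestor labeling with $O(\log^2 n)$-bit labels, built from a heavy-path decomposition. It only illustrates the kind of label the abstract refers to; it is not the paper's construction, and the function names and exact label encoding are invented for this example.

```python
from collections import defaultdict

def build_nca_labels(n, edges, root=0):
    """Return {node: label}; a label is a tuple of (heavy_path_head, depth_on_path) pairs."""
    adj, children, parent = defaultdict(list), defaultdict(list), {root: None}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    order, stack, seen = [root], [root], {root}
    while stack:                                    # iterative DFS to orient the tree
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                parent[w] = u
                children[u].append(w)
                order.append(w)
                stack.append(w)
    size = {u: 1 for u in order}                    # subtree sizes, children before parents
    for u in reversed(order):
        if parent[u] is not None:
            size[parent[u]] += size[u]
    heavy = {u: (max(children[u], key=size.get) if children[u] else None) for u in order}
    head, depth = {root: root}, {root: 0}           # heavy-path head and position on that path
    for u in order:
        for w in children[u]:
            if w == heavy[u]:
                head[w], depth[w] = head[u], depth[u] + 1
            else:                                   # light edge: w heads a new heavy path
                head[w], depth[w] = w, 0
    labels = {}
    for v in order:                                 # root-to-v path, one pair per heavy-path segment
        segs, u = [], v
        while u is not None:
            segs.append((head[u], depth[u]))
            u = parent[head[u]]                     # jump to the node just above this heavy path
        labels[v] = tuple(reversed(segs))           # O(log n) pairs of O(log n)-bit values
    return labels

def nca_from_labels(lu, lv):
    """Identify NCA(u, v) as a (heavy_path_head, depth) pair, reading only the two labels."""
    i = 0
    while i < len(lu) and i < len(lv) and lu[i] == lv[i]:
        i += 1
    if i < len(lu) and i < len(lv) and lu[i][0] == lv[i][0]:
        return (lu[i][0], min(lu[i][1], lv[i][1]))  # the two paths diverge inside one heavy path
    return lu[i - 1] if i > 0 else None             # they diverge right after a shared segment

# Tiny demo on a 6-node tree rooted at 0.
labels = build_nca_labels(6, [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5)])
print(nca_from_labels(labels[3], labels[4]))        # (0, 1): node 1, the NCA of 3 and 4
print(nca_from_labels(labels[3], labels[5]))        # (0, 0): node 0, the NCA of 3 and 5
```

Each label holds at most $O(\log n)$ pairs of $O(\log n)$-bit values, and a query reads only the two labels involved; that locality is what makes such a scheme attractive as a building block for a distributed, self-stabilizing MST construction.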




Read More

253 - Lelia Blin 2009
The minimum spanning tree (MST) construction is a classical problem in distributed computing: building a globally minimized structure in a distributed manner. Self-stabilization is a versatile technique for forward recovery that permits handling any kind of transient fault in a unified manner. The loop-free property provides an interesting safety guarantee in dynamic networks where edge costs change during the operation of the protocol. We present a new self-stabilizing MST protocol that improves on previously known approaches in several ways. First, it makes fewer system hypotheses, as the size of the network (or an upper bound on it) need not be known to the participants. Second, it is loop-free, in the sense that it guarantees that a spanning tree structure is always preserved while edge costs change dynamically and the protocol adjusts to a new MST. Finally, its time complexity matches the best known results, while its space complexity shows that this protocol is the most efficient to date.
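For intuition about the loop-free property, here is a small centralized sketch with invented names, not the distributed protocol of the paper: a single improving swap exchanges a non-tree edge with the costliest tree edge on the cycle it closes, so the edge set is a spanning tree both before and after every such atomic step, even as repeated steps migrate the tree toward a new MST.

```python
def improving_swap(tree, e_new, cost):
    """One loop-free adjustment step: if the non-tree edge e_new closes a cycle
    containing a costlier tree edge, exchange the two in a single atomic step."""
    tree = {frozenset(e) for e in tree}              # undirected tree edges
    u, v = tuple(e_new)
    adj = {}
    for e in tree:
        a, b = tuple(e)
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    parent, queue = {u: None}, [u]                   # BFS along tree edges: unique path u -> v
    while queue:
        x = queue.pop(0)
        for y in adj.get(x, []):
            if y not in parent:
                parent[y] = x
                queue.append(y)
    cycle, x = [], v
    while parent[x] is not None:                     # tree edges on the cycle closed by e_new
        cycle.append(frozenset((parent[x], x)))
        x = parent[x]
    worst = max(cycle, key=cost)
    if cost(worst) > cost(frozenset(e_new)):         # improving swap found
        tree.remove(worst)
        tree.add(frozenset(e_new))
    return tree                                      # still a spanning tree

# Example: swapping the costly edge (1, 2) for the cheaper non-tree edge (0, 2).
weights = {frozenset(e): w for e, w in [((0, 1), 1), ((1, 2), 5), ((0, 2), 2)]}
print(improving_swap([(0, 1), (1, 2)], (0, 2), weights.get))
```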
170 - Silvia Bonomi 2016
This paper proposes the first implementation of a self-stabilizing regular register emulated by $n$ servers that is tolerant to both mobile Byzantine agents and transient failures in a round-free synchronous model. Differently from existing mobile Byzantine tolerant register implementations, this paper considers a more powerful adversary where (i) the message delay (i.e., $\delta$) and the period of mobile Byzantine agents' movement (i.e., $\Delta$) are completely decoupled, and (ii) servers are not aware of their state, i.e., they do not know whether they have been corrupted by a mobile Byzantine agent. The proposed protocol tolerates (i) any number of transient failures and (ii) up to $f$ mobile Byzantine agents. In addition, our implementation uses bounded timestamps from the $\mathcal{Z}_{13}$ domain and is optimal with respect to the number of servers needed to tolerate $f$ mobile Byzantine agents in the given model.
131 - Laurent Feuilloley 2019
A distributed proof (also known as a local certification, or a proof-labeling scheme) is a mechanism to certify that the solution to a graph problem is correct. It takes the form of an assignment of labels to the nodes that can be checked locally. There exists such a proof for the minimum spanning tree problem, using $O(\log n \log W)$-bit labels (where $n$ is the number of nodes in the graph, and $W$ is the largest weight of an edge). This is due to Korman and Kutten, who describe it in a concise and formal manner in [Korman and Kutten 07]. In this note, we propose a more intuitive description of the result, as well as a gentle introduction to the problem.
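For intuition about how such locally checkable labels work, here is a hedged sketch of a much simpler scheme than the $O(\log n \log W)$ MST proof discussed above: it certifies only that the announced parent pointers form a spanning tree rooted at a single node, not that the tree is minimum. The function name and label format are invented for this illustration and are not taken from [Korman and Kutten 07].

```python
def node_check(v, label, labels_of_neighbors):
    """Return True iff node v accepts its label (root_id, dist, parent),
    given the labels of its neighbors as a dict {neighbor: label}."""
    root_id, dist, parent = label
    # all neighbors must agree on the root identity
    if any(nl[0] != root_id for nl in labels_of_neighbors.values()):
        return False
    if v == root_id:
        # the root certifies distance 0 and has no parent
        return dist == 0 and parent is None
    # a non-root must point to a neighbor whose certified distance is one smaller
    if parent not in labels_of_neighbors:
        return False
    return labels_of_neighbors[parent][1] == dist - 1

# If every node of a connected graph accepts, the parent pointers cannot close a
# cycle (certified distances strictly decrease along them), some node must be the
# agreed root, and every other node has a parent: the pointers form a spanning tree.
```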
Given a boolean predicate $\Pi$ on labeled networks (e.g., proper coloring, leader election, etc.), a self-stabilizing algorithm for $\Pi$ is a distributed algorithm that can start from any initial configuration of the network (i.e., every node has an arbitrary value assigned to each of its variables), and eventually converge to a configuration satisfying $\Pi$. It is known that leader election does not have a deterministic self-stabilizing algorithm using a constant-size register at each node, i.e., for some networks, some of their nodes must have registers whose sizes grow with the size $n$ of the networks. On the other hand, it is also known that leader election can be solved by a deterministic self-stabilizing algorithm using registers of $O(\log\log n)$ bits per node in any $n$-node bounded-degree network. We show that this latter space complexity is optimal. Specifically, we prove that every deterministic self-stabilizing algorithm solving leader election must use $\Omega(\log\log n)$-bit registers per node in some $n$-node networks. In addition, we show that our lower bounds go beyond leader election, and apply to all problems that cannot be solved by anonymous algorithms.
81 - Iosif Salem 2017
Vector clock algorithms are basic wait-free building blocks that facilitate causal ordering of events. As wait-free algorithms, they are guaranteed to complete their operations within a finite number of steps. Stabilizing algorithms allow the system to recover after the occurrence of transient faults, such as soft errors and arbitrary violations of the assumptions according to which the system was designed to behave. We present the first, to the best of our knowledge, stabilizing vector clock algorithm for asynchronous crash-prone message-passing systems that can recover in a wait-free manner after the occurrence of transient faults. In these settings, it is challenging to demonstrate a finite and wait-free recovery from (communication and crash failures as well as) transient faults, bound the message and storage sizes, deal with the removal of all stale information without blocking, and deal with counter overflow events (which occur at different network nodes concurrently). We present an algorithm that never violates safety in the absence of transient faults and provides bounded time recovery during fair executions that follow the last transient fault. The novelty is that in the absence of execution fairness, the algorithm guarantees a bound on the number of times in which the system might violate safety (while existing algorithms might block forever due to the presence of both transient faults and crash failures). Since vector clocks facilitate a number of elementary synchronization building blocks (without requiring remote replica synchronization) in asynchronous systems, we believe that our analytical insights are useful for the design of other systems that cannot guarantee execution fairness.
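For readers less familiar with the underlying building block, the following is a minimal sketch of the classical vector clock, with unbounded counters and no fault tolerance. The class and method names are illustrative; this is not the stabilizing algorithm of the paper, whose point is precisely to bound these counters and to recover the causal-ordering semantics after transient faults and crashes.

```python
class VectorClock:
    """Classical (non-stabilizing) vector clock for n nodes."""

    def __init__(self, node_id, n):
        self.node_id = node_id          # this node's index in [0, n)
        self.clock = [0] * n            # one counter per node

    def local_event(self):
        self.clock[self.node_id] += 1   # tick on every local step

    def on_send(self):
        self.local_event()
        return list(self.clock)         # timestamp piggybacked on the message

    def on_receive(self, remote):
        # entry-wise maximum merges the sender's causal history with ours
        self.clock = [max(a, b) for a, b in zip(self.clock, remote)]
        self.local_event()

def happened_before(a, b):
    """True iff timestamp a causally precedes timestamp b."""
    return all(x <= y for x, y in zip(a, b)) and a != b
```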