
Self-Stabilizing Wavelets and r-Hops Coordination

Added by Franck Petit
Publication date: 2007
Language: English





We introduce a simple tool called the wavelet (or r-wavelet) scheme. Wavelets deal with coordination among processes that are at most r hops away from each other. We present a self-stabilizing solution for this scheme. Our solution requires no underlying structure and works in arbitrary anonymous networks, i.e., no process identifier is required. Moreover, our solution works under any (even unfair) daemon. Next, we use the wavelet scheme to design self-stabilizing layer clocks. We show that they provide an efficient device for the design of solutions to local coordination problems at distance r, i.e., r-barrier synchronization and r-local resource allocation (LRA), such as r-local mutual exclusion (LME), r-group mutual exclusion (GME), and r-Readers/Writers. Some solutions to the r-LRA problem (e.g., r-LME) also provide transformers that turn algorithms written for any r-central daemon into algorithms working with any distributed daemon.
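To give a concrete feel for the kind of r-hop coordination the wavelet scheme supports, here is a minimal Python sketch of an unbounded r-hop phase clock: a process may increment its clock only when its value is minimal among all processes at most r hops away. This is a hypothetical, centralized simulation for illustration only; the paper's scheme is anonymous, uses only local communication and bounded memory, and is proved correct under any unfair daemon, none of which this toy simulation attempts to capture.

```python
import random
from collections import deque

def r_ball(adj, v, r):
    """Return the set of nodes within r hops of v (including v)."""
    seen, frontier = {v}, deque([(v, 0)])
    while frontier:
        u, d = frontier.popleft()
        if d == r:
            continue
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                frontier.append((w, d + 1))
    return seen

def simulate(adj, r, steps=2000, seed=1):
    rng = random.Random(seed)
    nodes = list(adj)
    # Transient faults are modeled by an arbitrary initial configuration.
    clock = {v: rng.randint(0, 50) for v in nodes}
    balls = {v: r_ball(adj, v, r) for v in nodes}
    for _ in range(steps):
        # A node is enabled when its clock is minimal in its r-ball;
        # the node holding the global minimum is always enabled, so no deadlock.
        enabled = [v for v in nodes
                   if all(clock[v] <= clock[u] for u in balls[v])]
        v = rng.choice(enabled)   # the daemon picks one enabled node
        clock[v] += 1             # guarded increment (the "wave" step)
    return clock

if __name__ == "__main__":
    n, r = 8, 2
    ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
    # Once every node has moved at least once, clocks of nodes at most r hops
    # apart differ by at most one in this simulation.
    print(simulate(ring, r))
```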




Related Research

We present a novel self-stabilizing algorithm for minimum spanning tree (MST) construction. The space complexity of our solution is $O(\log^2 n)$ bits and it converges in $O(n^2)$ rounds. Thus, this algorithm improves the convergence time of all previously known self-stabilizing asynchronous MST algorithms by a multiplicative factor $\Theta(n)$, at the price of increasing the best known space complexity by a factor $O(\log n)$. The main ingredient used in our algorithm is the design, for the first time in self-stabilizing settings, of a labeling scheme for computing the nearest common ancestor with only $O(\log^2 n)$ bits.
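The nearest-common-ancestor (NCA) labeling mentioned above can be illustrated with the classical heavy-path idea: each node stores the $O(\log n)$ (heavy path, depth) pairs along its root-to-node path, for $O(\log^2 n)$ bits in total, and NCA queries are answered from the two labels alone. The sketch below is a centralized, non-self-stabilizing illustration of this idea, not the paper's construction.

```python
def nca_labels(parent, root):
    """Heavy-path decomposition labels: each node gets a tuple of
    (heavy_path_id, depth_on_path) pairs along its root-to-node path.
    A root path crosses O(log n) light edges, so a label has O(log n)
    pairs, i.e. O(log^2 n) bits in total."""
    children = {v: [] for v in parent}
    for v, p in parent.items():
        if p is not None:
            children[p].append(v)
    # subtree sizes, computed bottom-up over a preorder traversal
    size = {v: 1 for v in parent}
    order, stack = [], [root]
    while stack:
        v = stack.pop()
        order.append(v)
        stack.extend(children[v])
    for v in reversed(order):
        if parent[v] is not None:
            size[parent[v]] += size[v]
    labels, next_id = {}, 1
    stack = [(root, (), 0, 0)]        # (node, prefix, path_id, depth_on_path)
    while stack:
        v, prefix, pid, d = stack.pop()
        labels[v] = prefix + ((pid, d),)
        if not children[v]:
            continue
        heavy = max(children[v], key=lambda c: size[c])
        stack.append((heavy, prefix, pid, d + 1))        # stay on the heavy path
        for c in children[v]:
            if c != heavy:                               # light edge: open a new path
                stack.append((c, labels[v], next_id, 0))
                next_id += 1
    return labels

def nca(label_u, label_v):
    """Compute the NCA's label from the two labels alone."""
    i = 0
    while i < len(label_u) and i < len(label_v) and label_u[i] == label_v[i]:
        i += 1
    if i == len(label_u):              # u is an ancestor of v (or u == v)
        return label_u
    if i == len(label_v):              # v is an ancestor of u
        return label_v
    (pu, du), (pv, dv) = label_u[i], label_v[i]
    if pu == pv:                       # both diverge on the same heavy path
        return label_u[:i] + ((pu, min(du, dv)),)
    return label_u[:i]                 # they branch just below a shared exit node

if __name__ == "__main__":
    # small example tree: 0 is the root; 1, 2 its children; 3, 4 under 1; 5 under 2
    parent = {0: None, 1: 0, 2: 0, 3: 1, 4: 1, 5: 2}
    lab = nca_labels(parent, 0)
    assert nca(lab[3], lab[4]) == lab[1]
    assert nca(lab[3], lab[5]) == lab[0]
    assert nca(lab[1], lab[4]) == lab[1]
```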
Given a boolean predicate $\Pi$ on labeled networks (e.g., proper coloring, leader election, etc.), a self-stabilizing algorithm for $\Pi$ is a distributed algorithm that can start from any initial configuration of the network (i.e., every node has an arbitrary value assigned to each of its variables), and eventually converge to a configuration satisfying $\Pi$. It is known that leader election does not have a deterministic self-stabilizing algorithm using a constant-size register at each node, i.e., for some networks, some of their nodes must have registers whose sizes grow with the size $n$ of the networks. On the other hand, it is also known that leader election can be solved by a deterministic self-stabilizing algorithm using registers of $O(\log\log n)$ bits per node in any $n$-node bounded-degree network. We show that this latter space complexity is optimal. Specifically, we prove that every deterministic self-stabilizing algorithm solving leader election must use $\Omega(\log\log n)$-bit per node registers in some $n$-node networks. In addition, we show that our lower bounds go beyond leader election, and apply to all problems that cannot be solved by anonymous algorithms.
Iosif Salem, 2017
Vector clock algorithms are basic wait-free building blocks that facilitate causal ordering of events. As wait-free algorithms, they are guaranteed to complete their operations within a finite number of steps. Stabilizing algorithms allow the system to recover after the occurrence of transient faults, such as soft errors and arbitrary violations of the assumptions according to which the system was designed to behave. We present the first, to the best of our knowledge, stabilizing vector clock algorithm for asynchronous crash-prone message-passing systems that can recover in a wait-free manner after the occurrence of transient faults. In these settings, it is challenging to demonstrate finite, wait-free recovery from transient faults (in addition to communication and crash failures), to bound the message and storage sizes, to remove all stale information without blocking, and to handle counter overflow events (which may occur concurrently at different network nodes). We present an algorithm that never violates safety in the absence of transient faults and provides bounded-time recovery during fair executions that follow the last transient fault. The novelty is that in the absence of execution fairness, the algorithm guarantees a bound on the number of times in which the system might violate safety (while existing algorithms might block forever due to the presence of both transient faults and crash failures). Since vector clocks facilitate a number of elementary synchronization building blocks (without requiring remote replica synchronization) in asynchronous systems, we believe that our analytical insights are useful for the design of other systems that cannot guarantee execution fairness.
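For reference, the sketch below shows the classical (non-stabilizing, crash-free) vector clock that this abstract builds on: tick the local entry on each event, merge with an entry-wise maximum on receive, and compare timestamps entry-wise for causal order. It is only background for the abstract above, not the paper's wait-free stabilizing algorithm.

```python
class VectorClock:
    """Classical vector clock for n nodes; illustrates causal ordering only."""
    def __init__(self, n, me):
        self.clock = [0] * n   # one counter per node
        self.me = me           # index of the local node

    def local_event(self):
        """Tick the local counter for an internal or send event."""
        self.clock[self.me] += 1
        return list(self.clock)            # timestamp to piggyback on a message

    def receive(self, remote):
        """Merge a received timestamp: entry-wise max, then tick locally."""
        self.clock = [max(a, b) for a, b in zip(self.clock, remote)]
        self.clock[self.me] += 1

    @staticmethod
    def happened_before(ts_a, ts_b):
        """True iff the event stamped ts_a causally precedes the one stamped ts_b."""
        return all(a <= b for a, b in zip(ts_a, ts_b)) and ts_a != ts_b

# tiny demo: p0 sends to p1, so p0's send happens-before p1's receive
p0, p1 = VectorClock(2, 0), VectorClock(2, 1)
ts_send = p0.local_event()
p1.receive(ts_send)
assert VectorClock.happened_before(ts_send, p1.clock)
```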
Silvia Bonomi, 2016
This paper proposes the first implementation of a self-stabilizing regular register emulated by $n$ servers that tolerates both mobile Byzantine agents and transient failures in a round-free synchronous model. Unlike existing mobile-Byzantine-tolerant register implementations, this paper considers a more powerful adversary where (i) the message delay (i.e., $\delta$) and the period of mobile Byzantine agents' movement (i.e., $\Delta$) are completely decoupled, and (ii) servers are not aware of their state, i.e., they do not know whether they have been corrupted by a mobile Byzantine agent. The proposed protocol tolerates (i) any number of transient failures and (ii) up to $f$ mobile Byzantine agents. In addition, our implementation uses bounded timestamps from the $\mathcal{Z}_{13}$ domain and is optimal with respect to the number of servers needed to tolerate $f$ mobile Byzantine agents in the given model.
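The bounded timestamps above are drawn from the cyclic domain $\mathcal{Z}_{13}$. As a purely illustrative sketch (not the paper's rule), the snippet below orders such wrap-around sequence numbers with a half-window comparison in the spirit of serial-number arithmetic; it assumes that concurrently live timestamps span less than half the domain.

```python
M = 13  # size of the cyclic timestamp domain Z_13

def next_ts(ts):
    """Successor of a timestamp in the cyclic domain."""
    return (ts + 1) % M

def fresher(a, b):
    """True iff timestamp a should be preferred over b.
    Illustrative half-window rule: a is fresher when it lies at most
    floor(M/2) steps ahead of b on the cycle. Valid only while all
    live timestamps span less than half the domain."""
    return 0 < (a - b) % M <= M // 2

# e.g. 1 is fresher than 11 because the counter wrapped around 12 -> 0 -> 1
assert fresher(1, 11)
assert not fresher(11, 1)
assert not fresher(5, 5)
```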
We study the problem of privately emulating shared memory in message-passing networks. The system includes clients that store and retrieve replicated information on $N$ servers, out of which $e$ are malicious. When a client accesses a malicious server, the data field of that server's response might differ from the value it originally stored; however, all other control variables in the server's reply, as well as its protocol actions, follow the server algorithm. For the coded atomic storage (CAS) algorithms by Cadambe et al., we present an enhancement that ensures no information leakage and tolerance to malicious servers. We also consider recovery after the occurrence of transient faults that violate the assumptions according to which the system is to behave. After their last occurrence, transient faults leave the system in an arbitrary state (while the program code stays intact). We present a self-stabilizing algorithm that recovers after the occurrence of transient faults. This addition to Cadambe et al. considers asynchronous settings as long as no transient faults occur. The recovery from transient faults that bring the system counters (close) to their maximal values may include the use of a global reset procedure, which requires the system run to be controlled by a fair scheduler. After the recovery period, the safety properties are provided for asynchronous system runs that are not necessarily controlled by fair schedulers. Since the recovery period is bounded and the occurrence of transient faults is extremely rare, we call this design criterion self-stabilization in the presence of seldom fairness. Our self-stabilizing algorithm uses bounded storage during asynchronous executions (that are not necessarily fair). To the best of our knowledge, we are the first to address privacy and self-stabilization in the context of emulating atomic shared memory in networked systems.
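To illustrate how a coded storage layer can avoid information leakage, here is a generic Shamir-style secret-sharing sketch over a prime field: any k of the n shares reconstruct the stored value, while fewer than k reveal nothing about it. This is a hypothetical illustration of the privacy idea only, not the CAS protocol of Cadambe et al. nor the paper's enhancement.

```python
import random

P = 2**61 - 1  # a large prime; the secret and the shares live in GF(P)

def share(secret, n, k, rng=random):
    """Split `secret` into n shares so that any k of them reconstruct it.
    Shares are points (x, f(x)) of a random degree-(k-1) polynomial with
    constant term `secret`; any k-1 shares reveal nothing about the secret."""
    coeffs = [secret % P] + [rng.randrange(P) for _ in range(k - 1)]
    def f(x):
        y = 0
        for c in reversed(coeffs):   # Horner evaluation modulo P
            y = (y * x + c) % P
        return y
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(points):
    """Lagrange interpolation at x = 0 recovers the secret from k (or more) shares."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(424242, n=5, k=3)
assert reconstruct(shares[:3]) == 424242      # any 3 shares suffice
assert reconstruct(shares[2:]) == 424242
```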