
Near-Optimal Self-Stabilising Counting and Firing Squads

Added by Joel Rybicki
Publication date: 2016
Language: English





Consider a fully-connected synchronous distributed system consisting of $n$ nodes, where up to $f$ nodes may be faulty and every node starts in an arbitrary initial state. In the synchronous $C$-counting problem, all nodes need to eventually agree on a counter that is increased by one modulo $C$ in each round for given $C>1$. In the self-stabilising firing squad problem, the task is to eventually guarantee that all non-faulty nodes have simultaneous responses to external inputs: if a subset of the correct nodes receive an external go signal as input, then all correct nodes should agree on a round (in the not-too-distant future) in which to jointly output a fire signal. Moreover, no node should generate a fire signal without some correct node having previously received a go signal as input. We present a framework reducing both tasks to binary consensus at very small cost. For example, we obtain a deterministic algorithm for self-stabilising Byzantine firing squads with optimal resilience $f<n/3$, asymptotically optimal stabilisation and response time $O(f)$, and message size $O(\log f)$. As our framework does not restrict the type of consensus routines used, we also obtain efficient randomised solutions, and it is straightforward to adapt our framework for other types of permanent faults.
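To make the counting task concrete, the following is a minimal Python sketch of the property that a correct solution must eventually guarantee, written as a checker over node output traces. It is not the paper's algorithm; the example traces, the parameter C, and the stabilisation round are illustrative assumptions.

def counting_spec_holds(traces, C, stabilisation_round):
    """traces[i][t] is the counter output of correct node i in round t."""
    rounds = len(traces[0])
    for t in range(stabilisation_round, rounds):
        values = {trace[t] for trace in traces}
        if len(values) != 1:                # agreement: identical counter values
            return False
        if t + 1 < rounds:
            (v,) = values
            if any(trace[t + 1] != (v + 1) % C for trace in traces):
                return False                # counter increases by one modulo C
    return True

# Three correct nodes start with arbitrary outputs and have stabilised by round 2 (C = 4).
traces = [
    [3, 0, 1, 2, 3, 0],
    [1, 1, 1, 2, 3, 0],
    [0, 2, 1, 2, 3, 0],
]
print(counting_spec_holds(traces, C=4, stabilisation_round=2))  # True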



Related Research

We give fault-tolerant algorithms for establishing synchrony in distributed systems in which each of the $n$ nodes has its own clock. Our algorithms operate in a very strong fault model: we require self-stabilisation, i.e., the initial state of the system may be arbitrary, and there can be up to $f<n/3$ ongoing Byzantine faults, i.e., nodes that deviate from the protocol in an arbitrary manner. Furthermore, we assume that the local clocks of the nodes may progress at different speeds (clock drift) and communication has bounded delay. In this model, we study the pulse synchronisation problem, where the task is to guarantee that eventually all correct nodes generate well-separated local pulse events (i.e., unlabelled logical clock ticks) in a synchronised manner. Compared to prior work, we achieve exponential improvements in stabilisation time and the number of communicated bits, and give the first sublinear-time algorithm for the problem:
- In the deterministic setting, the state-of-the-art solutions stabilise in time $\Theta(f)$ and have each node broadcast $\Theta(f \log f)$ bits per time unit. We exponentially reduce the number of bits broadcasted per time unit to $\Theta(\log f)$ while retaining the same stabilisation time.
- In the randomised setting, the state-of-the-art solutions stabilise in time $\Theta(f)$ and have each node broadcast $O(1)$ bits per time unit. We exponentially reduce the stabilisation time to $\log^{O(1)} f$ while each node broadcasts $\log^{O(1)} f$ bits per time unit.
These results are obtained by means of a recursive approach reducing the above task of self-stabilising pulse synchronisation in the bounded-delay model to non-self-stabilising binary consensus in the synchronous model. In general, our approach introduces at most logarithmic overheads in terms of stabilisation time and broadcasted bits over the underlying consensus routine.
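As an illustration of what pulse synchronisation requires (not of the recursive construction itself), the sketch below checks that the k-th pulses of all correct nodes lie within a small skew of each other and that consecutive pulses of each node are well separated. The pulse times, skew, and minimum separation are made-up parameters.

def pulses_synchronised(pulse_times, skew, min_separation):
    """pulse_times[i] is the sorted list of times at which correct node i pulses."""
    num_pulses = min(len(p) for p in pulse_times)
    for k in range(num_pulses):
        kth = [p[k] for p in pulse_times]
        if max(kth) - min(kth) > skew:          # k-th pulses must be synchronised
            return False
    for p in pulse_times:
        for a, b in zip(p, p[1:]):
            if b - a < min_separation:          # pulses must be well separated
                return False
    return True

print(pulses_synchronised([[1.0, 5.1, 9.0], [1.2, 5.0, 9.3]],
                          skew=0.5, min_separation=3.0))  # True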
Consider a complete communication network of $n$ nodes, where the nodes receive a common clock pulse. We study the synchronous $c$-counting problem: given any starting state and up to $f$ faulty nodes with arbitrary behaviour, the task is to eventually have all correct nodes labeling the pulses with increasing values modulo $c$ in agreement. Thus, we are considering algorithms that are self-stabilising despite Byzantine failures. In this work, we give new algorithms for the synchronous counting problem that (1) are deterministic, (2) have optimal resilience, (3) have a linear stabilisation time in $f$ (asymptotically optimal), (4) use a small number of states, and consequently, (5) communicate a small number of bits per round. Prior algorithms either resort to randomisation, use a large number of states and need high communication bandwidth, or have suboptimal resilience. In particular, we achieve an exponential improvement in both state complexity and message size for deterministic algorithms. Moreover, we present two complementary approaches for reducing the number of bits communicated during and after stabilisation.
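A small back-of-the-envelope calculation (with made-up state counts, not figures from the paper) shows why reducing state complexity matters: a node with s reachable states needs only about log2(s) bits to broadcast its state each round, so an exponential reduction in states yields an exponential reduction in message size.

from math import ceil, log2

def bits_per_round(num_states):
    # Bits needed to encode one of `num_states` states in a broadcast message.
    return ceil(log2(num_states))

print(bits_per_round(2**20))  # 20 bits per round for a large-state algorithm
print(bits_per_round(2**5))   # 5 bits per round after an exponential reduction in states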
Daniel Bosk (2020)
Mass surveillance of the population by state agencies and corporate parties is now a well-known fact. Journalists and whistle-blowers still lack means to circumvent global spying for the sake of their investigations. With Spores, we propose a way for journalists and their sources to plan a posteriori file exchanges when they physically meet. We leverage the proliferation of personal devices per capita to provide a lightweight, robust and fully anonymous decentralised file transfer protocol between users. Spores hinges on our novel concept of e-squads: one's personal devices, rendered intelligent by gossip communication protocols, can provide private and dependable services to their user. People's e-squads are federated into a novel onion routing network, able to withstand the inherent unreliability of personal appliances while providing reliable routing. Spores' performance is competitive, and the privacy properties of its communication outperform state-of-the-art onion routing strategies.
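For readers unfamiliar with the layered-forwarding idea that onion routing builds on, here is a deliberately non-cryptographic toy in Python: each relay peels one layer and learns only the next hop. It is an assumption-laden illustration, not the Spores protocol, and it performs no encryption.

def wrap(payload, route):
    """Wrap `payload` in one layer per relay; `route` lists the relays in order."""
    onion = {"deliver": payload}
    for hop in reversed(route[1:]):
        onion = {"next": hop, "inner": onion}
    return route[0], onion

def peel(onion):
    """What a single relay sees: either the final payload or only the next hop."""
    if "deliver" in onion:
        return None, onion["deliver"]
    return onion["next"], onion["inner"]

first_hop, onion = wrap("file-chunk-0", ["deviceA", "deviceB", "deviceC"])
hop = first_hop
while onion is not None:
    nxt, inner = peel(onion)
    if nxt is None:
        print(hop, "delivers payload:", inner)
        onion = None
    else:
        print(hop, "forwards to", nxt)
        hop, onion = nxt, inner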
The coflow scheduling problem has emerged as a popular abstraction in the last few years to study data communication problems within a data center. In this basic framework, each coflow has a set of communication demands and the goal is to schedule many coflows in a manner that minimizes the total weighted completion time. A coflow is said to complete when all its communication needs are met. This problem has been extremely well studied for the case of complete bipartite graphs that model a data center with full bisection bandwidth, and several approximation algorithms and effective heuristics have been proposed recently. In this work, we study a slightly different model of coflow scheduling in general graphs (to capture traffic between data centers) and develop practical and efficient approximation algorithms for it. Our main result is a randomized 2-approximation algorithm for the single-path and free-path models, significantly improving prior work. In addition, we demonstrate via extensive experiments that the algorithm is practical, easy to implement and performs well in practice.
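The objective the abstract refers to can be computed directly once a schedule fixes the finish time of every flow: a coflow completes when its last flow finishes, and the cost is the weighted sum of those completion times. The toy coflow data below is assumed for illustration only.

def total_weighted_completion_time(coflows):
    """coflows: list of (weight, finish_times) where finish_times are the coflow's flow finish times."""
    return sum(weight * max(finish_times) for weight, finish_times in coflows)

# Two toy coflows under some fixed schedule (all numbers made up).
coflows = [
    (2.0, [3, 5, 4]),   # completes at time 5
    (1.0, [6, 2]),      # completes at time 6
]
print(total_weighted_completion_time(coflows))  # 2.0*5 + 1.0*6 = 16.0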
This paper provides three nearly-optimal algorithms for scheduling $t$ jobs in the $\mathsf{CLIQUE}$ model. First, we present a deterministic scheduling algorithm that runs in $O(\mathsf{GlobalCongestion} + \mathsf{dilation})$ rounds for jobs that are sufficiently efficient in terms of their memory. The $\mathsf{dilation}$ is the maximum round complexity of any of the given jobs, and the $\mathsf{GlobalCongestion}$ is the total number of messages in all jobs divided by the per-round bandwidth of $n^2$ of the $\mathsf{CLIQUE}$ model. Both are inherent lower bounds for any scheduling algorithm. Then, we present a randomized scheduling algorithm which runs $t$ jobs in $O(\mathsf{GlobalCongestion} + \mathsf{dilation}\cdot\log{n} + t)$ rounds and only requires that inputs and outputs do not exceed $O(n\log n)$ bits per node, which is met by, e.g., almost all graph problems. Lastly, we adjust the \emph{random-delay-based} scheduling algorithm [Ghaffari, PODC15] from the $\mathsf{CLIQUE}$ model and obtain an algorithm that schedules any $t$ jobs in $O(t/n + \mathsf{LocalCongestion} + \mathsf{dilation}\cdot\log{n})$ rounds, where the $\mathsf{LocalCongestion}$ relates to the congestion at a single node of the $\mathsf{CLIQUE}$. We compare this algorithm to the previous approaches and show their benefit. We schedule the set of jobs on-the-fly, without a priori knowledge of its parameters or the communication patterns of the jobs. In light of the inherent lower bounds, all of our algorithms are nearly-optimal. We exemplify the power of our algorithms by analyzing the message complexity of the state-of-the-art MIS protocol [Ghaffari, Gouleakis, Konrad, Mitrovic and Rubinfeld, PODC18], and we show that we can solve $t$ instances of MIS in $O(t + \log\log\Delta\cdot\log{n})$ rounds, that is, in $O(1)$ amortized time, for $t \geq \log\log\Delta\cdot\log{n}$.
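The two lower-bound terms named in the abstract can be computed from per-job parameters; the sketch below does exactly that, using made-up job sizes and a made-up network size n purely for illustration.

def schedule_lower_bound(jobs, n):
    """jobs: list of (round_complexity, total_messages) per job in the CLIQUE model."""
    dilation = max(rounds for rounds, _ in jobs)
    global_congestion = sum(msgs for _, msgs in jobs) / (n * n)  # per-round bandwidth is n^2 messages
    # Any schedule needs at least this many rounds, up to constant factors.
    return max(dilation, global_congestion)

n = 100
jobs = [(10, 40_000), (7, 2_000_000), (25, 500_000)]
print(schedule_lower_bound(jobs, n))  # max(25, 2_540_000 / 10_000) = 254.0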