The test-and-set object is a fundamental synchronization primitive for shared memory systems. A test-and-set object stores a bit, initialized to 0, and supports one operation, test&set(), which sets the bit's value to 1 and returns its previous value. This paper studies the number of atomic registers required to implement a test-and-set object in the standard asynchronous shared memory model with $n$ processes. The best known lower bound is $\log n - 1$ for obstruction-free (Giakkoupis and Woelfel, 2012) and deadlock-free (Styer and Peterson, 1989) implementations. Recently, a deterministic obstruction-free implementation using $O(\sqrt{n})$ registers was presented (Giakkoupis, Helmi, Higham, and Woelfel, 2013). This paper closes the gap between these known upper and lower bounds by presenting a deterministic obstruction-free implementation of a test-and-set object from $\Theta(\log n)$ registers of size $\Theta(\log n)$ bits. We also provide a technique to transform any deterministic obstruction-free algorithm, in which, from any configuration, any process can finish if it runs for $b$ steps without interference, into a randomized wait-free algorithm for the oblivious adversary, in which the expected step complexity is polynomial in $n$ and $b$. This transformation allows us to combine our obstruction-free algorithm with the randomized test-and-set algorithm of Giakkoupis and Woelfel (2012) to obtain a randomized wait-free test-and-set algorithm from $\Theta(\log n)$ registers, with expected step complexity $\Theta(\log^\ast n)$ against the oblivious adversary.
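To make the object's sequential specification concrete, the following is a minimal Python sketch of a test-and-set object as described above. The class and method names are illustrative; this sequential version only captures the operation's semantics and deliberately ignores the register-based construction and the concurrency issues that the paper addresses.

```python
class TestAndSet:
    """Sequential specification of a test-and-set object (illustrative only)."""

    def __init__(self):
        self.bit = 0  # the stored bit, initialized to 0

    def test_and_set(self):
        """Set the bit to 1 and return its previous value.

        Exactly one caller (the first) receives 0; every later caller receives 1.
        In the shared-memory setting this behavior must be implemented from
        atomic read/write registers, which is what the abstract above studies.
        """
        previous = self.bit
        self.bit = 1
        return previous


# Example: the first call "wins" (returns 0); all subsequent calls return 1.
tas = TestAndSet()
assert tas.test_and_set() == 0
assert tas.test_and_set() == 1
```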
We study randomized test-and-set (TAS) implementations from registers in the asynchronous shared memory model with $n$ processes. We introduce the problem of group election, a natural variant of leader election, and propose a framework for the implementation of TAS objects from group election objects. We then present two group election algorithms, each yielding an efficient TAS implementation. The first implementation has expected max-step complexity $O(\log^\ast k)$ in the location-oblivious adversary model, and the second has expected max-step complexity $O(\log\log k)$ against any read/write-oblivious adversary, where $k \leq n$ is the contention. These algorithms improve the previous upper bound by Alistarh and Aspnes [2] of $O(\log\log n)$ expected max-step complexity in the oblivious adversary model. We also propose a modification to a TAS algorithm by Alistarh, Attiya, Gilbert, Giurgiu, and Guerraoui [5] for the strong adaptive adversary, which improves its space complexity from super-linear to linear, while maintaining its $O(\log n)$ expected max-step complexity. We then describe how this algorithm can be combined with any randomized TAS algorithm that has expected max-step complexity $T(n)$ in a weaker adversary model, so that the resulting algorithm has $O(\log n)$ expected max-step complexity against any strong adaptive adversary and $O(T(n))$ in the weaker adversary model. Finally, we prove that for any randomized 2-process TAS algorithm, there exists a schedule determined by an oblivious adversary such that with probability at least $(1/4)^t$ one of the processes needs at least $t$ steps to finish its TAS operation. This complements a lower bound by Attiya and Censor-Hillel [7] on a similar problem for $n \geq 3$ processes.
Given a Boolean predicate $\Pi$ on labeled networks (e.g., proper coloring, leader election, etc.), a self-stabilizing algorithm for $\Pi$ is a distributed algorithm that can start from any initial configuration of the network (i.e., every node has an arbitrary value assigned to each of its variables), and eventually converge to a configuration satisfying $\Pi$. It is known that leader election does not have a deterministic self-stabilizing algorithm using a constant-size register at each node, i.e., for some networks, some of their nodes must have registers whose sizes grow with the size $n$ of the networks. On the other hand, it is also known that leader election can be solved by a deterministic self-stabilizing algorithm using registers of $O(\log\log n)$ bits per node in any $n$-node bounded-degree network. We show that this latter space complexity is optimal. Specifically, we prove that every deterministic self-stabilizing algorithm solving leader election must use $\Omega(\log\log n)$-bit registers per node in some $n$-node networks. In addition, we show that our lower bounds go beyond leader election, and apply to all problems that cannot be solved by anonymous algorithms.
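As a toy illustration of the self-stabilization notion defined above, the Python sketch below simulates a classical greedy recoloring rule for proper coloring (one of the example predicates mentioned above) under a central scheduler: starting from an arbitrary coloring, an activated node in conflict with a neighbor picks the smallest color not used by its neighbors. This is a standard textbook example under assumed names and a central-daemon scheduling model, not the leader-election setting or the algorithms of the result above.

```python
def stabilize_coloring(adj, colors):
    """Greedy self-stabilizing proper coloring under a central scheduler.

    adj:    dict mapping each node to a list of its neighbors.
    colors: dict mapping each node to an arbitrary initial color (any ints).

    One conflicting node is activated at a time; it recolors to the smallest
    color unused by its neighbors.  Each recoloring removes all conflicts at
    the activated node and creates none elsewhere, so the number of
    conflicting edges strictly decreases and the system converges to a proper
    coloring from ANY initial configuration.
    """
    def conflicting(v):
        return any(colors[v] == colors[u] for u in adj[v])

    while True:
        enabled = [v for v in adj if conflicting(v)]
        if not enabled:
            return colors  # legitimate configuration reached
        v = enabled[0]  # central daemon: activate one enabled node
        taken = {colors[u] for u in adj[v]}
        colors[v] = next(c for c in range(len(adj) + 1) if c not in taken)


# Example: a triangle started from an arbitrary (improper) coloring.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(stabilize_coloring(adj, {0: 7, 1: 7, 2: 7}))  # e.g. {0: 0, 1: 1, 2: 2}
```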
This paper provides an algorithmic framework for obtaining fast distributed algorithms for a highly dynamic setting, in which arbitrarily many edge changes may occur in each round. Our algorithm significantly improves upon prior work in its combination of (1) having an $O(1)$ amortized time complexity, (2) using only $O(\log n)$-bit messages, (3) not posing any restrictions on the dynamic behavior of the environment, (4) being deterministic, (5) having strong guarantees for intermediate solutions, and (6) being applicable to a wide family of tasks. The tasks for which we deduce such an algorithm are maximal matching, $(\mathrm{degree}+1)$-coloring, 2-approximation for minimum weight vertex cover, and maximal independent set (which is the most subtle case). For some of these tasks, node insertions can also be among the allowed topology changes, and for some of them abrupt node deletions can be as well.
We show that there exists a Boolean function $F$ which exhibits the following separations among deterministic query complexity ($D(F)$), randomized zero-error query complexity ($R_0(F)$), and randomized one-sided error query complexity ($R_1(F)$): $R_1(F) = \widetilde{O}(\sqrt{D(F)})$ and $R_0(F) = \widetilde{O}(D(F)^{3/4})$. This refutes the conjecture made by Saks and Wigderson that for any Boolean function $f$, $R_0(f) = \Omega(D(f)^{0.753\ldots})$. This also shows the widest known separation between $R_1(f)$ and $D(f)$ for any Boolean function. The function $F$ was defined by Göös, Pitassi, and Watson, who studied it to show a separation between deterministic decision tree complexity and unambiguous non-deterministic decision tree complexity. Independently of us, Ambainis et al. proved that different variants of the function $F$ certify an optimal (quadratic) separation between $D(f)$ and $R_0(f)$, and a polynomial separation between $R_0(f)$ and $R_1(f)$. Viewed as separation results, our results are subsumed by those of Ambainis et al. However, while the functions considered in the work of Ambainis et al. are different variants of $F$, we work with the original function $F$ itself.
The known linear-time kernelizations for $d$-Hitting Set guarantee linear worst-case running times using a quadratic-size data structure (that is not fully initialized). Getting rid of this data structure, we show that problem kernels of asymptotically optimal size $O(k^d)$ for $d$-Hitting Set are computable in linear time and space. Additionally, we experimentally compare the linear-time kernelizations for $d$-Hitting Set to each other and to a classical data reduction algorithm due to Weihe.
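For intuition about the kind of data reduction involved, here is a minimal Python sketch of the classical Buss-style high-degree rule for the special case $d = 2$ (Vertex Cover viewed as 2-Hitting Set). It is only an illustration of kernelization in general, under assumed function and variable names, and is not the linear-time $d$-Hitting Set kernelization studied above.

```python
def buss_kernel(edges, k):
    """Buss-style kernelization for Vertex Cover (2-Hitting Set).

    edges: iterable of frozensets {u, v}; k: the solution-size parameter.
    High-degree rule: a vertex of degree > k must be in every vertex cover
    of size <= k, so take it into the solution and decrement k.  Afterwards
    every vertex has degree <= k, so a size-k cover can hit at most k*k
    edges; more remaining edges means a provable "no" instance.
    Returns (reduced_edges, k_remaining, forced_vertices) or None for "no".
    """
    edges = set(edges)
    forced = set()  # vertices that must belong to every small cover
    changed = True
    while changed:
        changed = False
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        for v, d in degree.items():
            if d > k:
                forced.add(v)
                edges = {e for e in edges if v not in e}
                k -= 1
                changed = True
                break
        if k < 0:
            return None
    if len(edges) > k * k:
        return None  # no vertex cover of size k exists
    return edges, k, forced


# Example: a star with 5 leaves and k = 1 -- the center is forced,
# leaving an empty (trivially solvable) kernel.
star = [frozenset({0, i}) for i in range(1, 6)]
print(buss_kernel(star, 1))  # (set(), 0, {0})
```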