
Linearizable Implementations Suffice for Termination of Randomized Concurrent Programs

Added by Jennifer Welch
Publication date: 2021
Language: English





Strong adversaries obtain additional power when a linearizable object is substituted for an atomic object in a concurrent program. This paper suggests a novel approach to blunting this additional power without relying on strongly linearizable implementations. Instead, a simple modification of some existing linearizable implementations is proposed, with the property that if a concurrent program has non-zero termination probability when used with atomic objects, then it also has non-zero termination probability when used with the modified linearizable implementations. Our results apply to the ABD implementation of a shared register in asynchronous message-passing systems and also to AAD+ linearizable snapshots in asynchronous shared-memory systems.
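For readers unfamiliar with ABD, the sketch below illustrates the two-phase pattern of its reads and writes: a read queries a majority of replicas, picks the freshest (timestamp, value) pair, and writes that pair back to a majority before returning. This is only a sequential, in-memory simulation of the message pattern under simplifying assumptions (no crashes, no real message passing, plain integer timestamps instead of (counter, writer-id) tags), and it does not include the modification proposed in the paper.

    # Illustrative sketch of the ABD read/write pattern over "replicas" kept
    # in one process's memory. Assumption: this is a sequential simulation
    # only; the real protocol is asynchronous, tolerates crashes, and waits
    # for acknowledgements from a majority of servers.

    class Replica:
        def __init__(self):
            self.ts, self.val = 0, None   # (timestamp, value) held by this server

    def majority(replicas):
        return len(replicas) // 2 + 1

    def abd_write(replicas, value):
        m = majority(replicas)
        # Phase 1: learn the highest timestamp held by some majority.
        new_ts = max(r.ts for r in replicas[:m]) + 1
        # Phase 2: store (new_ts, value) at a majority of replicas.
        for r in replicas[:m]:
            if new_ts > r.ts:
                r.ts, r.val = new_ts, value

    def abd_read(replicas):
        m = majority(replicas)
        # Phase 1: collect (ts, val) pairs from a majority, keep the freshest.
        best = max(replicas[:m], key=lambda r: r.ts)
        ts, val = best.ts, best.val
        # Phase 2: write the freshest pair back to a majority before
        # returning, so no later read can return an older value.
        for r in replicas[:m]:
            if ts > r.ts:
                r.ts, r.val = ts, val
        return val

    servers = [Replica() for _ in range(5)]
    abd_write(servers, "x")
    print(abd_read(servers))   # prints "x"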



Related research

We study randomized test-and-set (TAS) implementations from registers in the asynchronous shared memory model with $n$ processes. We introduce the problem of group election, a natural variant of leader election, and propose a framework for the implementation of TAS objects from group election objects. We then present two group election algorithms, each yielding an efficient TAS implementation. The first implementation has expected max-step complexity $O(\log^\ast k)$ in the location-oblivious adversary model, and the second has expected max-step complexity $O(\log\log k)$ against any read/write-oblivious adversary, where $k \leq n$ is the contention. These algorithms improve the previous upper bound by Alistarh and Aspnes [2] of $O(\log\log n)$ expected max-step complexity in the oblivious adversary model. We also propose a modification to a TAS algorithm by Alistarh, Attiya, Gilbert, Giurgiu, and Guerraoui [5] for the strong adaptive adversary, which improves its space complexity from super-linear to linear, while maintaining its $O(\log n)$ expected max-step complexity. We then describe how this algorithm can be combined with any randomized TAS algorithm that has expected max-step complexity $T(n)$ in a weaker adversary model, so that the resulting algorithm has $O(\log n)$ expected max-step complexity against any strong adaptive adversary and $O(T(n))$ in the weaker adversary model. Finally, we prove that for any randomized 2-process TAS algorithm, there exists a schedule determined by an oblivious adversary such that with probability at least $(1/4)^t$ one of the processes needs at least $t$ steps to finish its TAS operation. This complements a lower bound by Attiya and Censor-Hillel [7] on a similar problem for $n \geq 3$ processes.
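For context, the object all of these algorithms implement has a very small sequential specification: a one-shot test-and-set object returns "win" to exactly one of the processes that invoke it and "lose" to every other. A minimal sketch of that specification follows, using a Python lock as a stand-in for atomicity; it is not one of the register-based randomized constructions discussed above.

    import threading

    class TestAndSet:
        """Sequential specification of a one-shot test-and-set object:
        the first process to call tas() wins, every later caller loses."""

        def __init__(self):
            self._lock = threading.Lock()
            self._taken = False

        def tas(self):
            with self._lock:
                if not self._taken:
                    self._taken = True
                    return "win"
                return "lose"

    # Among many callers, exactly one observes "win".
    t = TestAndSet()
    print(t.tas(), t.tas())   # win lose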
This paper proposes a general framework for adding linearizable iterators to a class of data structures that implement set operations. We introduce a condition on set operations, called local consistency, which informally states that set operations never make elements unreachable to a sequential iterator's traversal. We show that sets with locally consistent operations can be augmented with a linearizable iterator via the framework. Our technique is broadly applicable to a variety of data structures, including hash tables and binary search trees. We apply the technique to sets taken from existing literature, prove their operations are locally consistent, and demonstrate that iterators do not significantly affect the performance of concurrent set operations.
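To make the iterator interface concrete, here is a deliberately coarse-grained baseline in which every operation, including iteration, holds a single lock, so the iterator is trivially linearizable at the point where it copies the set. This sketch is only a reference point for the semantics; it blocks concurrent updates and is not the framework proposed in the paper.

    import threading

    class LockedSet:
        """Coarse-grained set whose iterator returns a consistent snapshot.
        Every operation holds one lock, so iteration is linearized at the
        moment the snapshot copy is taken."""

        def __init__(self):
            self._lock = threading.Lock()
            self._items = set()

        def insert(self, x):
            with self._lock:
                self._items.add(x)

        def remove(self, x):
            with self._lock:
                self._items.discard(x)

        def contains(self, x):
            with self._lock:
                return x in self._items

        def __iter__(self):
            with self._lock:
                snapshot = list(self._items)   # copied atomically
            return iter(snapshot)              # traversed outside the lock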
The notion of program sensitivity (aka Lipschitz continuity) specifies that changes in the program input result in proportional changes to the program output. For probabilistic programs the notion is naturally extended to expected sensitivity. A previous approach develops a relational program logic framework for proving expected sensitivity of probabilistic while loops, where the number of iterations is fixed and bounded. In this work, we consider probabilistic while loops where the number of iterations is not fixed, but randomized and depends on the initial input values. We present a sound approach for proving expected sensitivity of such programs. Our approach is martingale-based and can be automated through existing martingale-synthesis algorithms. Furthermore, our approach is compositional for sequential composition of while loops under a mild side condition. We demonstrate the effectiveness of our approach on several classical examples, including Gambler's Ruin, stochastic hybrid systems, and stochastic gradient descent. We also present experimental results showing that our automated approach can handle various probabilistic programs in the literature.
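To illustrate what expected sensitivity means on the Gambler's Ruin example mentioned above, the sketch below empirically compares the expected number of rounds for two nearby initial capitals. The particular parameters and the Monte Carlo estimate are our own illustration of the property, not the paper's martingale-based proof technique.

    import random

    def gamblers_ruin_rounds(capital, p=0.45, cap=100000):
        """Play a slightly unfavorable game until ruin (capital == 0) or a
        safety cap; the number of iterations is randomized and depends on
        the initial input, as in the class of loops studied above."""
        rounds = 0
        while capital > 0 and rounds < cap:
            capital += 1 if random.random() < p else -1
            rounds += 1
        return rounds

    def expected_rounds(capital, trials=20000):
        return sum(gamblers_ruin_rounds(capital) for _ in range(trials)) / trials

    # Expected sensitivity: a unit change in the initial capital changes the
    # expected number of rounds by a bounded amount (about 1/(1 - 2p) = 10 here).
    print(expected_rounds(10), expected_rounds(11))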
We present an efficient approach to prove termination of monotone programs with integer variables, an expressive class of loops that is often encountered in computer programs. Our approach is based on a lightweight static analysis method and takes advantage of simple properties of monotone functions. Our preliminary implementation shows that our tool has an advantage over existing tools and can prove termination for a high percentage of loops for a class of benchmarks.
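As one concrete (and hypothetical, not taken from the paper's benchmarks) instance of the targeted class, consider a loop whose guard compares monotone integer expressions and whose body updates a variable monotonically; termination then follows from a simple ranking argument, as in the sketch below.

    def monotone_loop(x, y):
        """Example monotone integer loop: the guard is x < y, each iteration
        strictly increases x and leaves y unchanged, so the nonnegative
        ranking function y - x strictly decreases and the loop stops after
        at most max(0, y - x) iterations."""
        steps = 0
        while x < y:
            x += 1          # monotone (strictly increasing) update of x
            steps += 1
        return steps

    print(monotone_loop(0, 10))   # 10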
It has been observed that linearizability, the prevalent consistency condition for implementing concurrent objects, does not preserve some probability distributions. A stronger condition, called strong linearizability, has been proposed, but its study has been somewhat ad hoc. This paper investigates strong linearizability by casting it in the context of observational refinement of objects. We present a strengthening of observational refinement, which generalizes strong linearizability, obtaining several important implications. When a concrete concurrent object refines another, more abstract object (often a sequential one), the correctness of a program employing the concrete object can be verified by considering its behaviors when using the more abstract object. This means that trace properties of a program using the concrete object can be proved by considering the program with the abstract object. This, however, does not hold for hyperproperties, including many security properties and probability distributions of events. We define strong observational refinement, a strengthening of refinement that preserves hyperproperties, and prove that it is equivalent to the existence of forward simulations. We show that strong observational refinement generalizes strong linearizability. This implies that strong linearizability is also equivalent to forward simulation, and shows that strongly linearizable implementations can be composed both horizontally (i.e., locality) and vertically (i.e., with instantiation). For situations where strongly linearizable implementations do not exist (or are less efficient), we argue that reasoning about hyperproperties of programs can be simplified by strong observational refinement of abstract objects that are not necessarily sequential.
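For reference, the notion of forward simulation invoked in the equivalence above can be stated in its standard textbook form; this is a generic formulation and not necessarily the precise definition used in the paper. A relation $R$ between the states of a concrete object (automaton) $A$ and an abstract object $B$ is a forward simulation from $A$ to $B$ if (1) every initial state of $A$ is related by $R$ to some initial state of $B$, and (2) whenever a state $s$ of $A$ is related to a state $u$ of $B$ and $A$ takes a step from $s$ to $s'$, there is an execution fragment of $B$ from $u$ to some state $u'$ with the same trace of externally visible actions such that $s'$ is related to $u'$. The existence of such a relation lets every concrete execution be matched, step by step and in the forward direction, by an abstract execution with the same observable behavior.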
