
Efficient Randomized Test-And-Set Implementations

Posted by Philipp Woelfel
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





We study randomized test-and-set (TAS) implementations from registers in the asynchronous shared memory model with $n$ processes. We introduce the problem of group election, a natural variant of leader election, and propose a framework for the implementation of TAS objects from group election objects. We then present two group election algorithms, each yielding an efficient TAS implementation. The first implementation has expected max-step complexity $O(\log^\ast k)$ in the location-oblivious adversary model, and the second has expected max-step complexity $O(\log\log k)$ against any read/write-oblivious adversary, where $k \leq n$ is the contention. These algorithms improve the previous upper bound by Alistarh and Aspnes [2] of $O(\log\log n)$ expected max-step complexity in the oblivious adversary model. We also propose a modification to a TAS algorithm by Alistarh, Attiya, Gilbert, Giurgiu, and Guerraoui [5] for the strong adaptive adversary, which improves its space complexity from super-linear to linear, while maintaining its $O(\log n)$ expected max-step complexity. We then describe how this algorithm can be combined with any randomized TAS algorithm that has expected max-step complexity $T(n)$ in a weaker adversary model, so that the resulting algorithm has $O(\log n)$ expected max-step complexity against any strong adaptive adversary and $O(T(n))$ in the weaker adversary model. Finally, we prove that for any randomized 2-process TAS algorithm, there exists a schedule determined by an oblivious adversary such that with probability at least $(1/4)^t$ one of the processes needs at least $t$ steps to finish its TAS operation. This complements a lower bound by Attiya and Censor-Hillel [7] on a similar problem for $n \geq 3$ processes.
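
To get a feel for the two-process lower bound, the following Python sketch (an illustrative toy, not any algorithm from this paper) simulates a generic round-based coin-flipping protocol under a lock-step schedule: a round can only break the symmetry between the two processes when their coin flips differ, so the protocol remains undecided after $t$ rounds with probability $(1/2)^t$, exponentially small but nonzero, which is the flavor of the $(1/4)^t$ bound stated above.

import random

def lockstep_rounds(rng, max_rounds=64):
    # Simulate a generic round-based coin-flipping protocol for two
    # processes under a lock-step (oblivious) schedule.  A round is
    # decisive only when the two flips differ; otherwise both
    # processes must continue.  Illustrative toy, not the paper's TAS.
    for r in range(1, max_rounds + 1):
        if rng.randint(0, 1) != rng.randint(0, 1):
            return r  # rounds needed until the symmetry broke
    return max_rounds

rng = random.Random(1)
trials, t = 100_000, 5
slow = sum(lockstep_rounds(rng) > t for _ in range(trials))
print(f"fraction of runs needing more than {t} rounds: {slow / trials:.4f}")
# Expected to be close to (1/2)**t = 0.03125.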


Read also

The test-and-set object is a fundamental synchronization primitive for shared memory systems. A test-and-set object stores a bit, initialized to 0, and supports one operation, test&set(), which sets the bit's value to 1 and returns its previous value. This paper studies the number of atomic registers required to implement a test-and-set object in the standard asynchronous shared memory model with n processes. The best known lower bound is log(n)-1 registers, for obstruction-free (Giakkoupis and Woelfel, 2012) and deadlock-free (Styer and Peterson, 1989) implementations. Recently, a deterministic obstruction-free implementation using O(sqrt(n)) registers was presented (Giakkoupis, Helmi, Higham, and Woelfel, 2013). This paper closes the gap between these known upper and lower bounds by presenting a deterministic obstruction-free implementation of a test-and-set object from Theta(log n) registers of size Theta(log n) bits. We also provide a technique to transform any deterministic obstruction-free algorithm in which, from any configuration, any process can finish if it runs for b steps without interference, into a randomized wait-free algorithm for the oblivious adversary, in which the expected step complexity is polynomial in n and b. This transformation allows us to combine our obstruction-free algorithm with the randomized test-and-set algorithm by Giakkoupis and Woelfel (2012) to obtain a randomized wait-free test-and-set algorithm from Theta(log n) registers, with expected step complexity Theta(log* n) against the oblivious adversary.
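
As a concrete reference point, the sequential specification described above (a bit initialized to 0, and a single test&set() operation that sets it to 1 and returns the previous value) fits in a few lines. The Python class below captures only that sequential behaviour; it says nothing about how to implement the object from atomic registers in an obstruction-free or wait-free way, which is the difficulty addressed by the paper.

class TestAndSet:
    # Sequential specification of a test-and-set object:
    # a single bit, initialized to 0.
    def __init__(self):
        self.bit = 0

    def test_and_set(self):
        # Set the bit to 1 and return its previous value.
        # Exactly one caller ever sees 0 (the "winner").
        previous = self.bit
        self.bit = 1
        return previous

tas = TestAndSet()
print(tas.test_and_set())  # 0: the first caller wins
print(tas.test_and_set())  # 1: every later caller loses
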
Strong adversaries obtain additional power when a linearizable object is substituted for an atomic object in a concurrent program. This paper suggests a novel approach to blunting this additional power without relying on strongly linearizable implementations. Instead, a simple modification of some existing linearizable implementations is proposed, with the property that if a concurrent program has non-zero termination probability when used with atomic objects, then it also has non-zero termination probability when used with the modified linearizable implementations. Our results apply to the ABD implementation of a shared register in asynchronous message-passing systems and also to AAD+ linearizable snapshots in asynchronous shared-memory systems.
Approaches for testing sets of variants, such as a set of rare or common variants within a gene or pathway, for association with complex traits are important. In particular, set tests allow for aggregation of weak signal within a set, can capture interplay among variants, and reduce the burden of multiple hypothesis testing. Until now, these approaches did not address confounding by family relatedness and population structure, a problem that is becoming more important as larger data sets are used to increase power. Results: We introduce a new approach for set tests that handles confounders. Our model is based on the linear mixed model and uses two random effects: one to capture the set association signal and one to capture confounders. We also introduce a computational speedup for two-random-effects models that makes this approach feasible even for extremely large cohorts. Using this model with both the likelihood ratio test and the score test, we find that the former yields more power while controlling type I error. Application of our approach to richly structured GAW14 data demonstrates that our method successfully corrects for population structure and family relatedness, while application of our method to a 15,000-individual Crohn's disease case-control cohort demonstrates that it additionally recovers genes not recoverable by univariate analysis. Availability: A Python-based library implementing our approach is available at http://mscompbio.codeplex.com
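
For readers unfamiliar with the modelling device, the numpy sketch below evaluates the log-likelihood of a generic linear mixed model with two variance components, y ~ N(Xb, s2_set*K_set + s2_conf*K_conf + s2_e*I), where K_set encodes the variant set and K_conf the confounders (relatedness and population structure). The variable names are hypothetical and this is not the authors' library or their computational speedup; a likelihood ratio test then compares the fitted alternative model against the null model in which s2_set is fixed to zero.

import numpy as np

def lmm_loglik(y, X, beta, K_set, K_conf, s2_set, s2_conf, s2_e):
    # Log-likelihood of y ~ N(X @ beta, V) with
    # V = s2_set*K_set + s2_conf*K_conf + s2_e*I.
    # K_set captures the variant-set signal, K_conf the confounders.
    n = len(y)
    V = s2_set * K_set + s2_conf * K_conf + s2_e * np.eye(n)
    resid = y - X @ beta
    _, logdet = np.linalg.slogdet(V)
    quad = resid @ np.linalg.solve(V, resid)
    return -0.5 * (n * np.log(2 * np.pi) + logdet + quad)

# Likelihood-ratio statistic: alternative model (both random effects)
# versus null model (set variance fixed to 0), using fitted estimates:
# lr = 2 * (lmm_loglik(y, X, b_alt, K_set, K_conf, s2_set_hat, s2_conf_hat, s2_e_hat)
#           - lmm_loglik(y, X, b_null, K_set, K_conf, 0.0, s2_conf_null, s2_e_null))
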
In this paper, we perform an ablation study of eatfa, a neuro-evolved foraging algorithm that has recently been shown to forage efficiently under different resource distributions. Through selective disabling of input signals, we identify a sufficiently minimal set of input features that contribute the most towards determining search trajectories which favor high resource collection rates. Our experiments reveal that, independent of how the resources are distributed in the arena, the signals that give the controller the ability to switch from searching for resources to transporting them back to the nest are the most critical. Additionally, we find that pheromones play a key role in boosting the controller's performance by providing signals for informed locomotion in the search for unforaged resources.
Piecewise Linear Approximation (PLA) is a well-established tool for reducing the size of the representation of a time series by approximating the series by a sequence of line segments while keeping the error introduced by the approximation within some predetermined threshold. With the recent rise of edge computing, PLA algorithms find a completely new set of applications, with the emphasis on reducing the volume of streamed data. In this study, we identify two scenarios set in a data-stream processing context: data reduction in sensor transmissions and datacenter storage. In connection with those scenarios, we identify several streaming metrics and propose streaming protocols as algorithmic implementations of several state-of-the-art PLA techniques. In an experimental evaluation, we measure the quality of the reviewed methods and protocols and evaluate their performance against those streaming metrics. All known methods have deficiencies when it comes to handling streaming-like data, e.g., inflation of the input stream, high latency, or poor average error. Our experimental results highlight the challenges raised when transferring those classical methods into the stream-processing world, and present alternative techniques to overcome them and balance the related trade-offs.
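
To make concrete what such protocols compute, the following Python sketch is a naive greedy segmenter under a maximum-error constraint (a simple baseline, not one of the state-of-the-art techniques or streaming protocols evaluated in this study): it extends the current segment point by point along the line through the segment's endpoints and emits the segment as soon as any covered point would deviate from that line by more than the threshold eps.

def greedy_pla(points, eps):
    # Naive greedy piecewise linear approximation.
    # points: list of (t, y) pairs with strictly increasing t.
    # Returns segments ((t_start, y_start), (t_end, y_end)) such that
    # every covered point lies within eps of the line through the endpoints.
    segments = []
    i = 0
    while i < len(points):
        t0, y0 = points[i]
        end = i  # last index known to fit the current segment
        for j in range(i + 1, len(points)):
            tj, yj = points[j]
            slope = (yj - y0) / (tj - t0)
            if all(abs(y0 + slope * (t - t0) - y) <= eps
                   for t, y in points[i:j + 1]):
                end = j
            else:
                break
        segments.append((points[i], points[end]))
        i = end + 1
    return segments

data = [(t, 0.5 * t + (0.3 if t % 7 == 0 else 0.0)) for t in range(1, 30)]
print(greedy_pla(data, eps=0.2))  # a handful of segments instead of 29 points
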