
Global-Local View: Scalable Consistency for Concurrent Data Types

Published by Carlos Baquero
Publication date: 2017
Research field: Informatics Engineering
Paper language: English





Concurrent linearizable access to shared objects can be prohibitively expensive in a high contention workload. Many applications apply ad-hoc techniques to eliminate the need for synchronous atomic updates, which may result in non-linearizable implementations. We propose a new programming model which leverages such patterns for concurrent access to objects in a shared memory system. In this model, each thread maintains two views on the shared object: a thread-local view and a global view. As the thread-local view is not shared, it can be updated without incurring synchronization costs. These local updates become visible to other threads only after the thread-local view is merged with the global view. This enables better performance at the expense of linearizability. We show that it is possible to maintain thread-local views and to perform merges efficiently for several data types, and we evaluate their performance and scalability compared to linearizable implementations. Further, we discuss the consistency semantics of the data types and the associated programming model.
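
The abstract describes the model only at a high level; as a rough illustration, the following C++ sketch applies the global/local idea to a simple counter. The class name GlobalLocalCounter and the operations increment, merge, and weak_read are chosen here for exposition and are not the paper's API: increments touch only a thread-local delta, and only merge performs an atomic operation on the shared state.

#include <atomic>
#include <cstdint>
#include <iostream>
#include <thread>
#include <vector>

class GlobalLocalCounter {
    std::atomic<std::int64_t> global_{0};           // global view, shared by all threads
    static thread_local std::int64_t local_delta_;  // thread-local view, no synchronization
                                                    // (per thread and per type, not per
                                                    //  instance: a simplification)
public:
    // Update only the thread-local view: no atomic operation, no contention.
    void increment(std::int64_t n = 1) { local_delta_ += n; }

    // Merge the thread-local view into the global view. This is the only
    // synchronization point; how often it runs is the consistency/performance knob.
    void merge() {
        if (local_delta_ != 0) {
            global_.fetch_add(local_delta_, std::memory_order_release);
            local_delta_ = 0;
        }
    }

    // Weak read: the global view plus this thread's unmerged updates. Other
    // threads' unmerged updates are invisible, so this read is not linearizable.
    std::int64_t weak_read() const {
        return global_.load(std::memory_order_acquire) + local_delta_;
    }
};

thread_local std::int64_t GlobalLocalCounter::local_delta_ = 0;

int main() {
    GlobalLocalCounter counter;
    std::vector<std::thread> workers;
    for (int t = 0; t < 4; ++t) {
        workers.emplace_back([&counter] {
            for (int i = 0; i < 100000; ++i) {
                counter.increment();
                if (i % 1024 == 0) counter.merge();  // amortize the atomic update
            }
            counter.merge();  // publish whatever is still local
        });
    }
    for (auto& w : workers) w.join();
    std::cout << counter.weak_read() << "\n";  // 400000 once every thread has merged
    return 0;
}

The merge frequency is exactly the knob the model exposes: frequent merges approach the behavior (and cost) of a linearizable counter, while infrequent merges trade staleness of weak_read for scalability under contention.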




Read also

We present a fully lock-free variant of the recent Montage system for persistent data structures. Our variant, nbMontage, adds persistence to almost any nonblocking concurrent structure without introducing significant overhead or blocking of any kind. Like its predecessor, nbMontage is buffered durably linearizable: it guarantees that the state recovered in the wake of a crash will represent a consistent prefix of pre-crash execution. Unlike its predecessor, nbMontage ensures wait-free progress of the persistence frontier, thereby bounding the number of recent updates that may be lost on a crash, and allowing a thread to force an update of the frontier (i.e., to perform a sync operation) without the risk of blocking. As an extra benefit, the helping mechanism employed by our wait-free sync significantly reduces its latency. Performance results for nonblocking queues, skip lists, trees, and hash tables rival custom data structures in the literature -- dramatically faster than achieved with prior general-purpose systems, and generally within 50% of equivalent non-persistent structures placed in DRAM.
Concurrent data structures are the data sharing side of parallel programming. Data structures give a program the means to store data, but they also provide the operations through which the program accesses and manipulates these data. These operations are implemented through algorithms that have to be efficient. In the sequential setting, data structures are crucially important for the performance of the respective computation. In the parallel programming setting, their importance becomes even more crucial because of the increased use of data and resource sharing for utilizing parallelism. The first and main goal of this chapter is to provide sufficient background and intuition to help the interested reader navigate the complex research area of lock-free data structures. The second goal is to offer the programmer enough familiarity with the subject to use truly concurrent methods.
We improve the fundamental security threshold of eventual consensus Proof-of-Stake (PoS) blockchain protocols under the longest-chain rule by showing, for the first time, the positive effect of rounds with concurrent honest leaders. Current security analyses reduce consistency to the dynamics of an abstract, round-based block creation process that is determined by three events associated with a round: (i) event $A$: at least one adversarial leader, (ii) event $S$: a single honest leader, and (iii) event $M$: multiple, but honest, leaders. We present an asymptotically optimal consistency analysis assuming that an honest round is more likely than an adversarial round (i.e., $Pr[S] + Pr[M] > Pr[A]$); this threshold is optimal. This is a first in the literature and can be applied to both the simple synchronous communication as well as communication with bounded delays. In all existing consistency analyses, event $M$ is either penalized or treated neutrally. Specifically, the consistency analyses in Ouroboros Praos (Eurocrypt 2018) and Genesis (CCS 2018) assume that $Pr[S] - Pr[M] > Pr[A]$; the analyses in Sleepy Consensus (Asiacrypt 2017) and Snow White (Fin. Crypto 2019) assume that $Pr[S] > Pr[A]$. Moreover, all existing analyses completely break down when $Pr[S] < Pr[A]$. These thresholds determine the critical trade-off between the honest majority, network delays, and consistency error. Our new results can be directly applied to improve the security guarantees of the existing protocols. We also provide an efficient algorithm to explicitly calculate these error probabilities in the synchronous setting. Furthermore, we complement these results by analyzing the setting where $S$ is rare, even allowing $Pr[S] = 0$, under the added assumption that honest players adopt a consistent chain selection rule.
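
As a concrete illustration of the gap (the probabilities below are chosen here for exposition, not taken from the abstract), suppose $Pr[S] = 0.4$, $Pr[M] = 0.2$, and $Pr[A] = 0.4$:

$Pr[S] + Pr[M] = 0.6 > 0.4 = Pr[A]$ (the new condition holds)
$Pr[S] - Pr[M] = 0.2 < 0.4 = Pr[A]$ (the Praos/Genesis condition fails)
$Pr[S] = 0.4 \not> 0.4 = Pr[A]$ (the Sleepy/Snow White condition fails)

So rounds with multiple honest leaders, which the earlier analyses penalize or ignore, are exactly what pushes this instance over the new threshold.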
This paper explains the scalable methods used for extracting and analyzing Covid-19 vaccine data. Using Big Data tools such as Hadoop and Hive, we collect and analyze the massive data sets of Covid-19 confirmed cases, fatalities, and vaccinations. The data size is about 3.2 gigabytes. We show that it is possible to store and process massive data with Big Data tools. The paper performs a spatio-temporal analysis, and maps, charts, and pie charts visualize the results of the investigation. We illustrate that the more people are vaccinated, the fewer the confirmed cases.
We present an approach for efficiently taking snapshots of the state of a collection of CAS objects. Taking a snapshot allows later operations to read the value that each CAS object had at the time the snapshot was taken. Taking a snapshot requires a constant number of steps and returns a handle to the snapshot. Reading a snapshotted value of an individual CAS object using this handle is wait-free, taking time proportional to the number of successful CASes on the object since the snapshot was taken. Our fast, flexible snapshots yield simple, efficient implementations of atomic multi-point queries on concurrent data structures built from CAS objects. For example, in a search tree where child pointers are updated using CAS, once a snapshot is taken, one can atomically search for ranges of keys, find the first key that matches some criteria, or check if a collection of keys are all present, simply by running a standard sequential algorithm on a snapshot of the tree. To evaluate the performance of our approach, we apply it to two search trees, one balanced and one not. Experiments show that the overhead of supporting snapshots is low across a variety of workloads. Moreover, in almost all cases, range queries on the trees built from our snapshots perform as well as or better than state-of-the-art concurrent data structures that support atomic range queries.
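
The abstract does not spell out the mechanism, but the stated costs (constant-time snapshots, per-object snapshot reads proportional to the number of successful CASes since the snapshot) are consistent with a versioned-CAS scheme. The following C++ sketch is a simplified illustration of that general idea, not the authors' algorithm: the names camera, VersionedCas, take_snapshot, and read_at are invented here, and the race between timestamping and concurrent CASes, as well as memory reclamation of old versions, are deliberately ignored.

#include <atomic>
#include <cstdint>
#include <iostream>

// Global "camera": taking a snapshot only advances this counter, so it costs a
// constant number of steps and the returned value serves as the handle.
std::atomic<std::uint64_t> camera{1};

using Snapshot = std::uint64_t;

Snapshot take_snapshot() { return camera.fetch_add(1); }

template <typename T>
class VersionedCas {
    struct Version {
        T value;
        std::uint64_t ts;  // camera value observed when this version was installed
        Version* prev;     // older versions, newest first (never reclaimed here)
    };
    std::atomic<Version*> head_;

public:
    explicit VersionedCas(T initial) : head_(new Version{initial, 0, nullptr}) {}

    T read() const { return head_.load()->value; }

    // CAS on the current value; a successful CAS installs a new version stamped
    // with the current camera value. (A real implementation must also handle the
    // race where the camera advances between the load and the install.)
    bool cas(T expected, T desired) {
        Version* h = head_.load();
        if (h->value != expected) return false;
        Version* v = new Version{desired, camera.load(), h};
        if (head_.compare_exchange_strong(h, v)) return true;
        delete v;
        return false;
    }

    // Value the object had when snapshot s was taken: skip every version
    // installed after s, so the cost is proportional to the number of
    // successful CASes on this object since the snapshot.
    T read_at(Snapshot s) const {
        Version* v = head_.load();
        while (v->ts > s) v = v->prev;
        return v->value;
    }
};

int main() {
    VersionedCas<int> x(0);
    x.cas(0, 1);                      // before the snapshot
    Snapshot s = take_snapshot();
    x.cas(1, 2);                      // after the snapshot
    std::cout << x.read() << " " << x.read_at(s) << "\n";  // prints "2 1"
    return 0;
}

Running a sequential query (a range scan, a predicate search, a membership check over many keys) against read_at views of several such objects is what yields the atomic multi-point queries described in the abstract.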