We start by summarizing the recently proposed implementation of the first non-blocking concurrent interpolation search tree (C-IST) data structure. We then analyze the individual operations of the C-IST, and show that they are correct and linearizable. We furthermore show that lookup (and several other non-destructive operations) are wait-free, and that the insert and delete operations are lock-free. We continue by showing that the C-IST has the following properties. For arbitrary key distributions, this data structure ensures worst-case $O(\log n + p)$ amortized time for search, insertion and deletion traversals. When the input key distributions are smooth, lookups run in expected $O(\log \log n + p)$ time, and insertion and deletion run in expected amortized $O(\log \log n + p)$ time, where $p$ is a bound on the number of threads. Finally, we present an extended experimental evaluation of the non-blocking IST performance.
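To make the doubly-logarithmic bound concrete, the sketch below shows plain sequential interpolation search over a sorted array of keys, which is the per-node ingredient behind an IST's expected $O(\log \log n)$ lookups on smooth key distributions. It is an illustrative Java routine with hypothetical names, not the concurrent C-IST code analyzed in the paper.

```java
// Illustrative sequential interpolation search over a sorted long[] array.
// Assumes key ranges are small enough that the probe computation does not overflow.
final class InterpolationSearch {
    static int search(long[] keys, long target) {
        int lo = 0, hi = keys.length - 1;
        while (lo <= hi && target >= keys[lo] && target <= keys[hi]) {
            if (keys[hi] == keys[lo]) break;           // avoid division by zero
            // Estimate the position of target, assuming keys are roughly evenly spread.
            int pos = lo + (int) (((target - keys[lo]) * (hi - lo)) / (keys[hi] - keys[lo]));
            if (keys[pos] == target) return pos;
            if (keys[pos] < target) lo = pos + 1; else hi = pos - 1;
        }
        return -1; // not present
    }
}
```

On smooth (e.g. near-uniform) inputs the probe lands close to the target's true rank, which is what drives the expected doubly-logarithmic number of steps.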
This paper presents the first implementation of a search tree data structure in an asynchronous shared-memory system that provides a wait-free algorithm for executing range queries on the tree, in addition to non-blocking algorithms for Insert, Delete and Find, using single-word Compare-and-Swap (CAS). The implementation is linearizable and tolerates any number of crash failures. Insert and Delete operations that operate on different parts of the tree run fully in parallel (without any interference with one another). We employ a lightweight helping mechanism, where each Insert, Delete and Find operation helps only update operations that affect the local neighbourhood of the leaf it arrives at. Similarly, a Scan helps only those updates taking place on nodes of the part of the tree it traverses, and therefore Scans operating on different parts of the tree do not interfere with one another. Our implementation works in a dynamic system where the number of processes may change over time. The implementation builds upon the non-blocking binary search tree implementation presented by Ellen et al. (in PODC 2010) by applying a simple mechanism to make the tree persistent.
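The sketch below illustrates the flag-then-help pattern that single-word-CAS trees in this lineage (going back to Ellen et al.) rely on: an update installs a descriptor in the affected node, any thread that encounters the descriptor helps complete it, and the node is then unflagged. The Java code is a simplified illustration with hypothetical names (Node, InsertOp, help); it is not the paper's implementation, omits the search phase and the Delete case, and assumes the chosen child slot was observed to be empty by the preceding traversal.

```java
import java.util.concurrent.atomic.AtomicReference;

final class Node {
    final int key;
    final AtomicReference<Node> left = new AtomicReference<>();
    final AtomicReference<Node> right = new AtomicReference<>();
    // Pending operation descriptor: null means the node is "clean".
    final AtomicReference<InsertOp> pending = new AtomicReference<>();
    Node(int key) { this.key = key; }
}

final class InsertOp {
    final Node parent; final Node newLeaf; final boolean asLeft;
    InsertOp(Node parent, Node newLeaf, boolean asLeft) {
        this.parent = parent; this.newLeaf = newLeaf; this.asLeft = asLeft;
    }
}

final class SketchTree {
    // Any thread that finds a pending descriptor completes it, so a flagged
    // node can never block other operations indefinitely.
    static void help(InsertOp op) {
        AtomicReference<Node> child = op.asLeft ? op.parent.left : op.parent.right;
        child.compareAndSet(null, op.newLeaf);       // attach the new leaf (idempotent)
        op.parent.pending.compareAndSet(op, null);   // unflag the parent
    }

    // Attach leaf under parent; assumes the traversal saw the target child slot as null.
    static void insertLeaf(Node parent, Node leaf, boolean asLeft) {
        InsertOp op = new InsertOp(parent, leaf, asLeft);
        while (true) {
            InsertOp current = parent.pending.get();
            if (current != null) { help(current); continue; }  // help, then retry
            if (parent.pending.compareAndSet(null, op)) {      // flag the parent
                help(op);                                      // complete own operation
                return;
            }
        }
    }
}
```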
We consider the design of adaptive data structures for searching elements of a tree-structured space. We use a natural generalization of the rotation-based online binary search tree model in which the underlying search space is the set of vertices of a tree. This model is based on a simple structure for decomposing graphs, previously known under several names including elimination trees, vertex rankings, and tubings. The model is equivalent to the classical binary search tree model exactly when the underlying tree is a path. We describe an online $O(\log \log n)$-competitive search tree data structure in this model, matching the best known competitive ratio of binary search trees. Our method is inspired by Tango trees, an online binary search tree algorithm, but critically needs several new notions, including one which we call Steiner-closed search trees, which may be of independent interest. Moreover, our technique is based on a novel use of two levels of decomposition, first from the search space to a set of Steiner-closed trees, and secondly from these trees into paths.
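One concrete instance of the decomposition underlying this model is the elimination tree obtained by repeatedly removing a centroid of the underlying tree, which has $O(\log n)$ depth. The Java sketch below builds such a decomposition; the class and field names are illustrative, and the construction is the standard centroid one, not the paper's competitive algorithm.

```java
import java.util.ArrayList;
import java.util.List;

// Builds an elimination (search) tree over an undirected tree by repeated
// centroid removal; every vertex becomes a node of the search tree, and the
// resulting depth is O(log n).
final class CentroidDecomposition {
    private final List<Integer>[] adj;
    private final boolean[] removed;
    private final int[] subtreeSize;
    final int[] searchTreeParent;   // -1 marks the root of the search tree

    @SuppressWarnings("unchecked")
    CentroidDecomposition(int n, int[][] edges) {
        adj = new List[n];
        for (int i = 0; i < n; i++) adj[i] = new ArrayList<>();
        for (int[] e : edges) { adj[e[0]].add(e[1]); adj[e[1]].add(e[0]); }
        removed = new boolean[n];
        subtreeSize = new int[n];
        searchTreeParent = new int[n];
        build(0, -1);
    }

    private int computeSize(int v, int parent) {
        subtreeSize[v] = 1;
        for (int u : adj[v])
            if (u != parent && !removed[u]) subtreeSize[v] += computeSize(u, v);
        return subtreeSize[v];
    }

    private int findCentroid(int v, int parent, int treeSize) {
        for (int u : adj[v])
            if (u != parent && !removed[u] && subtreeSize[u] > treeSize / 2)
                return findCentroid(u, v, treeSize);
        return v;
    }

    private void build(int v, int parentInSearchTree) {
        int size = computeSize(v, -1);
        int centroid = findCentroid(v, -1, size);
        searchTreeParent[centroid] = parentInSearchTree;
        removed[centroid] = true;               // split the tree at the centroid
        for (int u : adj[centroid])
            if (!removed[u]) build(u, centroid); // recurse on each component
    }
}
```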
Recently, Avis and Jordan have demonstrated the efficiency of a simple technique called budgeting for the parallelization of a number of tree search algorithms. The idea is to limit the amount of work that a processor performs before it terminates its search and returns any unexplored nodes to a master process. This limit is set by a critical budget parameter, which determines the overhead of the process. In this paper we study the behaviour of the budget parameter on conditional Galton-Watson trees, obtaining asymptotically tight bounds on this overhead. We present empirical results to show that this bound is surprisingly accurate in practice.
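As a rough illustration of the budgeting technique itself (not of the Galton-Watson analysis), the sketch below shows a worker that explores at most a fixed budget of nodes by depth-first search and then returns its unexplored frontier to the master; the Expander interface and all names are hypothetical.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Budgeted depth-first search: a worker expands at most `budget` nodes and
// hands back whatever is left on its stack for the master to redistribute.
final class BudgetedWorker {
    interface Expander<N> { List<N> children(N node); }

    /** Explores up to `budget` nodes starting from `roots`; returns the unexplored frontier. */
    static <N> List<N> run(List<N> roots, int budget, Expander<N> expand) {
        Deque<N> stack = new ArrayDeque<>(roots);
        int explored = 0;
        while (!stack.isEmpty() && explored < budget) {
            N node = stack.pop();
            explored++;
            for (N child : expand.children(node)) stack.push(child);
        }
        return new ArrayList<>(stack);  // leftover work returned to the master
    }
}
```

Choosing the budget trades off communication with the master against load imbalance, which is exactly the overhead the analysis bounds.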
We describe a general technique for obtaining provably correct, non-blocking implementations of a large class of tree data structures where pointers are directed from parents to children. Updates are permitted to modify any contiguous portion of the tree atomically. Our non-blocking algorithms make use of the LLX, SCX and VLX primitives, which are multi-word generalizations of the standard LL, SC and VL primitives and have been implemented from single-word CAS. To illustrate our technique, we describe how it can be used in a fairly straightforward way to obtain a non-blocking implementation of a chromatic tree, which is a relaxed variant of a red-black tree. The height of the tree at any time is $O(c + \log n)$, where $n$ is the number of keys and $c$ is the number of updates in progress. We provide an experimental performance analysis which demonstrates that our Java implementation of a chromatic tree rivals, and often significantly outperforms, other leading concurrent dictionaries.
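The Java sketch below shows the simpler pattern that LLX/SCX-based updates generalize: build new copies of the nodes in the affected contiguous portion (here, a right rotation) and publish the whole change by swinging a single pointer with one CAS. Real SCX additionally snapshots the affected nodes with LLX and finalizes removed nodes to detect conflicting updates; that machinery, and the chromatic-tree rebalancing rules, are omitted, and all names here are illustrative.

```java
import java.util.concurrent.atomic.AtomicReference;

final class TreeNode {
    final int key;
    final AtomicReference<TreeNode> left;
    final AtomicReference<TreeNode> right;
    TreeNode(int key, TreeNode left, TreeNode right) {
        this.key = key;
        this.left = new AtomicReference<>(left);
        this.right = new AtomicReference<>(right);
    }
}

final class RotationSketch {
    // Attempts a right rotation at parent.left, replacing the rotated portion
    // with freshly allocated copies and publishing the change via one CAS.
    static boolean tryRotateRightAtLeftChild(TreeNode parent) {
        TreeNode oldChild = parent.left.get();
        if (oldChild == null) return false;
        TreeNode pivot = oldChild.left.get();
        if (pivot == null) return false;
        // Build the rotated replacement out of new nodes; the old nodes are
        // never mutated by this update.
        TreeNode newChild = new TreeNode(oldChild.key,
                pivot.right.get(), oldChild.right.get());
        TreeNode newPivot = new TreeNode(pivot.key, pivot.left.get(), newChild);
        // One CAS publishes the whole contiguous change; on failure, a
        // concurrent update won and the caller simply retries.
        return parent.left.compareAndSet(oldChild, newPivot);
    }
}
```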
We show that the $O(\log n)$ time extract-minimum function of efficient priority queues can be generalized to the extraction of the $k$ smallest elements in $O(k \log(n/k))$ time, where we define $\log(x)$ as $\max(\log_2(x), 1)$. We first show that heap-ordered tree selection (Kaplan et al., SOSA 19) can be applied on the heap-ordered trees of the classic Fibonacci heap to support the extraction in $O(k \log(n/k))$ amortized time. We then show selection is possible in a priority queue with optimal worst-case guarantees by applying heap-ordered tree selection on Brodal queues (SODA 96), supporting the operation in $O(k \log(n/k))$ worst-case time. Via a reduction from the multiple selection problem, $\Omega(k \log(n/k))$ time is necessary if insertion is supported in $o(\log n)$ time. We then apply the result to lazy search trees (Sandlund & Wild, FOCS 20), creating a new interval data structure based on selectable heaps. This gives optimal $O(B + n)$ time lazy search tree performance, lowering the insertion complexity into a gap $\Delta_i$ to $O(\log(n/|\Delta_i|))$ time. An $O(1)$ time merge operation is also made possible when used as a priority queue, among other situations. If Brodal queues are used, runtimes of the lazy search tree can be made worst-case in the general case of two-sided gaps. The presented data structure makes fundamental use of soft heaps (Chazelle, J. ACM 00), biased search trees, and efficient priority queues, approaching the theoretically best data structure for ordered data.
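For intuition, the sketch below extracts the $k$ smallest elements from an ordinary array-based binary min-heap without modifying it, using a candidate priority queue; this simpler routine runs in $O(k \log k)$ time, whereas the paper's heap-ordered tree selection achieves $O(k \log(n/k))$ on Fibonacci heaps and Brodal queues. The Java code and names are illustrative, not the paper's data structure.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

// Selects the k smallest elements of a binary min-heap (array layout) by
// exploring it top-down with a candidate queue, never popping the heap itself.
final class HeapSelect {
    static List<Integer> kSmallest(int[] heap, int k) {
        List<Integer> result = new ArrayList<>();
        if (heap.length == 0 || k <= 0) return result;
        // Candidates ordered by value; each entry is {value, index in heap array}.
        PriorityQueue<int[]> candidates =
                new PriorityQueue<>((a, b) -> Integer.compare(a[0], b[0]));
        candidates.add(new int[]{heap[0], 0});
        while (result.size() < k && !candidates.isEmpty()) {
            int[] top = candidates.poll();
            result.add(top[0]);
            int left = 2 * top[1] + 1, right = 2 * top[1] + 2;
            if (left < heap.length) candidates.add(new int[]{heap[left], left});
            if (right < heap.length) candidates.add(new int[]{heap[right], right});
        }
        return result;
    }
}
```

Each extraction adds at most two new candidates, so the candidate queue never exceeds $k + 1$ entries, which is where the $O(k \log k)$ bound comes from.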