
Settling the relationship between Wilber's bounds for dynamic optimality

Added by Victor Lecomte
Publication date: 2019
Research language: English





In FOCS 1986, Wilber proposed two combinatorial lower bounds on the operational cost of any binary search tree (BST) for a given access sequence $X \in [n]^m$. Both bounds play a central role in the ongoing pursuit of the dynamic optimality conjecture (Sleator and Tarjan, 1985), but their relationship remained unknown for more than three decades. We show that Wilber's Funnel bound dominates his Alternation bound for all $X$, and give a tight $\Theta(\lg\lg n)$ separation for some $X$, answering Wilber's conjecture and an open problem of Iacono, Demaine et al. The main ingredient of the proof is a new symmetric characterization of Wilber's Funnel bound, which proves that it is invariant under rotations of $X$. We use this characterization to provide initial indication that the Funnel bound matches the Independent Rectangle bound (Demaine et al., 2009), by proving that when the Funnel bound is constant, $\mathsf{IRB}_{\diagup\hspace{-.6em}\square}$ is linear. To the best of our knowledge, our results provide the first progress on Wilber's conjecture that the Funnel bound is dynamically optimal (1986).
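
The Alternation bound referenced above has a short concrete definition: fix a static balanced "reference" tree over the keys and, for every node, count how often the accesses to its subtree alternate between the left and right child. Below is a minimal sketch of that computation, assuming a perfectly balanced reference tree over the key range; the function name and example sequence are ours, and none of the paper's Funnel-bound machinery appears here.

```python
# A minimal sketch of Wilber's Alternation bound (Wilber I), assuming a
# perfectly balanced static reference tree over the key range.

def alternation_bound(X, lo=None, hi=None):
    """Sum, over the nodes of a balanced reference tree on [lo, hi], of the
    number of alternations between left- and right-subtree accesses in X."""
    if lo is None:
        lo, hi = min(X), max(X)
    if lo >= hi or len(X) < 2:
        return 0
    mid = (lo + hi) // 2                  # root of this reference subtree
    sides = ['L' if x <= mid else 'R' for x in X]
    alternations = sum(1 for a, b in zip(sides, sides[1:]) if a != b)
    left = [x for x in X if x <= mid]     # recurse into the two subtrees
    right = [x for x in X if x > mid]
    return (alternations
            + alternation_bound(left, lo, mid)
            + alternation_bound(right, mid + 1, hi))

# Example: a sequence that keeps switching sides forces many alternations.
print(alternation_bound([1, 5, 3, 7, 2, 6, 4, 8]))   # -> 17
```
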

Related research

We prove a separation between offline and online algorithms for finger-based tournament heaps undergoing key modifications. These heaps are implemented by binary trees with keys stored on leaves, and intermediate nodes tracking the min of their respective subtrees. They represent a natural starting point for studying self-adjusting heaps due to the need to access the root-to-leaf path upon modifications. We combine previous studies on the competitive ratios of unordered binary search trees by [Fredman WADS2011] and on order-by-next-request by [Martinez-Roura TCS2000] and [Munro ESA2000] to show that for any number of fingers, tournament heaps cannot handle a sequence of modify-key operations with competitive ratio in $o(\sqrt{\log n})$. Critical to this analysis is the characterization of the modifications that a heap can undergo upon an access. There are $\exp(\Theta(n \log n))$ valid heaps on $n$ keys, but only $\exp(\Theta(n))$ binary search trees. We parameterize the modification power through the well-studied concept of fingers: additional pointers the data structure can manipulate arbitrarily. Here we demonstrate that fingers can be significantly more powerful than servers moving on a static tree, by showing that access to $k$ fingers allows an offline algorithm to handle any access sequence with amortized cost $O(\log_k(n) + 2^{\lg^* n})$.
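
To make the object of study concrete, here is a minimal sketch of a static-shape tournament heap, assuming keys at the leaves of a complete binary tree in array layout with each internal node caching the minimum of its subtree; a modify-key operation must repair the entire leaf-to-root path, which is exactly the access pattern the result above exploits. The class and method names are ours, and neither self-adjustment nor fingers are modeled.

```python
# A toy static-shape tournament heap: keys at leaves, internal nodes
# cache subtree minima. Assumes at least two keys.

class TournamentHeap:
    def __init__(self, keys):
        self.n = len(keys)
        self.tree = [float('inf')] * (2 * self.n)
        self.tree[self.n:] = keys            # leaves occupy slots n..2n-1
        for v in range(self.n - 1, 0, -1):   # internal node = min of children
            self.tree[v] = min(self.tree[2 * v], self.tree[2 * v + 1])

    def find_min(self):
        return self.tree[1]

    def modify_key(self, i, new_key):
        """Overwrite the key at leaf i, repairing the leaf-to-root path."""
        v = self.n + i
        self.tree[v] = new_key
        v //= 2
        while v >= 1:
            self.tree[v] = min(self.tree[2 * v], self.tree[2 * v + 1])
            v //= 2

h = TournamentHeap([5, 3, 8, 1])
h.modify_key(3, 9)        # overwrite the key 1 stored at leaf index 3
print(h.find_min())       # -> 3
```
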
We consider a range of simply stated dynamic data structure problems on strings. An update changes one symbol in the input and a query asks us to compute some function of the pattern of length $m$ and a substring of a longer text. We give both conditional and unconditional lower bounds for variants of exact matching with wildcards, inner product, and Hamming distance computation via a sequence of reductions. As an example, we show that there does not exist an $O(m^{1/2-\varepsilon})$ time algorithm for a large range of these problems unless the online Boolean matrix-vector multiplication conjecture is false. We also provide nearly matching upper bounds for most of the problems we consider.
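
For intuition about the trade-off being ruled out, here is the naive end of it for the Hamming-distance variant, assuming $O(1)$-time updates and $O(m)$-time queries; the lower bound above says no algorithm can push both operations to $O(m^{1/2-\varepsilon})$ unless the online matrix-vector conjecture fails. The class name is ours.

```python
# Naive dynamic Hamming distance: O(1) per update, O(m) per query.

class DynamicHamming:
    def __init__(self, text, pattern):
        self.text = list(text)
        self.pattern = list(pattern)

    def update(self, i, symbol):
        """Replace one symbol of the text."""
        self.text[i] = symbol

    def query(self, i):
        """Hamming distance between the pattern and text[i : i+m]."""
        window = self.text[i:i + len(self.pattern)]
        return sum(a != b for a, b in zip(self.pattern, window))

d = DynamicHamming("abracadabra", "abra")
print(d.query(7))       # "abra" vs "abra" -> 0
d.update(7, 'x')
print(d.query(7))       # "xbra" vs "abra" -> 1
```
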
In the classic Integer Programming (IP) problem, the objective is to decide whether, for a given $m \times n$ matrix $A$ and an $m$-vector $b=(b_1,\dots,b_m)$, there is a non-negative integer $n$-vector $x$ such that $Ax=b$. Solving (IP) is an important step in numerous algorithms, and it is important to obtain an understanding of the precise complexity of this problem as a function of natural parameters of the input. The classic pseudo-polynomial time algorithm of Papadimitriou [J. ACM 1981] for instances of (IP) with a constant number of constraints was only recently improved upon by Eisenbrand and Weismantel [SODA 2018] and Jansen and Rohwedder [ArXiv 2018]. We continue this line of work and show that under the Exponential Time Hypothesis (ETH), the algorithm of Jansen and Rohwedder is nearly optimal. We also show that when the matrix $A$ is assumed to be non-negative, a component of Papadimitriou's original algorithm is already nearly optimal under ETH. This motivates us to pick up the line of research initiated by Cunningham and Geelen [IPCO 2007], who studied the complexity of solving (IP) with non-negative matrices in which the number of constraints may be unbounded, but the branch-width of the column-matroid corresponding to the constraint matrix is a constant. We prove a lower bound on the complexity of solving (IP) for such instances and obtain optimal results with respect to a closely related parameter, path-width. Specifically, we prove matching upper and lower bounds for (IP) when the path-width of the corresponding column-matroid is a constant.
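
For concreteness, here is the decision problem itself in the non-negative setting, as a brute-force sketch that searches over all right-hand sides reachable as $Ax$ and bounded componentwise by $b$; this is only a pseudo-polynomial-flavored illustration of the problem, not the parameterized algorithms discussed above, and the function name is ours.

```python
# Brute-force feasibility check for Ax = b with x a non-negative integer
# vector, assuming A and b are non-negative integers: BFS over reachable
# right-hand sides, pruning anything that overshoots b in a coordinate.

def ip_feasible(A, b):
    m = len(A)
    columns = [tuple(A[i][j] for i in range(m)) for j in range(len(A[0]))]
    target = tuple(b)
    if target == (0,) * m:
        return True                      # x = 0 already works
    seen = {(0,) * m}
    frontier = [(0,) * m]
    while frontier:
        nxt = []
        for v in frontier:
            for col in columns:          # adding a column = incrementing x_j
                w = tuple(v[i] + col[i] for i in range(m))
                if w == target:
                    return True
                if all(w[i] <= target[i] for i in range(m)) and w not in seen:
                    seen.add(w)
                    nxt.append(w)
        frontier = nxt
    return False

A = [[2, 3],
     [1, 2]]
print(ip_feasible(A, [8, 5]))   # x = (1, 2): 2+6 = 8, 1+4 = 5 -> True
print(ip_feasible(A, [1, 3]))   # no column fits under b -> False
```
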
Given a directed weighted graph $G=(V,E)$ undergoing vertex insertions \emph{and} deletions, the All-Pairs Shortest Paths (APSP) problem asks to maintain a data structure that processes updates efficiently and returns after each update the distance matrix of the current version of $G$. In two breakthrough results, Demetrescu and Italiano [STOC 03] presented an algorithm that requires $\tilde{O}(n^2)$ \emph{amortized} update time, and Thorup showed in [STOC 05] that \emph{worst-case} update time $\tilde{O}(n^{2+3/4})$ can be achieved. In this article, we make substantial progress on the problem. We present the following new results: (1) We present the first deterministic data structure that breaks the $\tilde{O}(n^{2+3/4})$ worst-case update time bound by Thorup, which has been standing for almost 15 years. We improve the worst-case update time to $\tilde{O}(n^{2+5/7}) = \tilde{O}(n^{2.71...})$, and to $\tilde{O}(n^{2+3/5}) = \tilde{O}(n^{2.6})$ for unweighted graphs. (2) We present a simple deterministic algorithm with $\tilde{O}(n^{2+3/4})$ worst-case update time ($\tilde{O}(n^{2+2/3})$ for unweighted graphs), and a simple Las Vegas algorithm with worst-case update time $\tilde{O}(n^{2+2/3})$ ($\tilde{O}(n^{2+1/2})$ for unweighted graphs) that works against a non-oblivious adversary. Both data structures require space $\tilde{O}(n^2)$. These are the first exact dynamic algorithms with truly subcubic update time \emph{and} space usage. This makes significant progress on an open question posed in multiple articles [COCOON 01, STOC 03, ICALP 04, Encyclopedia of Algorithms 08] and is critical to algorithms in practice [TALG 06], where large space usage is prohibitive. Moreover, they match the worst-case update time of the best previous algorithms, and the second algorithm improves upon a Monte Carlo algorithm in a weaker adversary model with the same running time [SODA 17].
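
As a baseline for the update times above, here is the trivial fully dynamic APSP strategy, assuming one simply reruns Floyd-Warshall in $O(n^3)$ after every vertex update; the data structures in the article replace exactly this recomputation with subcubic worst-case work. All names here are illustrative.

```python
# Naive fully dynamic APSP: recompute all distances from scratch after
# each vertex update. INF encodes "no edge".

INF = float('inf')

def floyd_warshall(w):
    """All-pairs distances for an n x n weight matrix."""
    n = len(w)
    d = [row[:] for row in w]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

class NaiveDynamicAPSP:
    def __init__(self, w):
        self.w = [row[:] for row in w]
        self.dist = floyd_warshall(self.w)

    def update_vertex(self, v, in_edges, out_edges):
        """Replace all edges incident to v, then recompute everything."""
        for u in range(len(self.w)):
            self.w[u][v] = in_edges.get(u, INF)
            self.w[v][u] = out_edges.get(u, INF)
        self.w[v][v] = 0
        self.dist = floyd_warshall(self.w)

w = [[0, 4, INF],
     [INF, 0, 1],
     [INF, INF, 0]]
apsp = NaiveDynamicAPSP(w)
print(apsp.dist[0][2])                    # 0 -> 1 -> 2, cost 5
apsp.update_vertex(1, {0: 1}, {2: 1})     # cheapen the path through vertex 1
print(apsp.dist[0][2])                    # now cost 2
```
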
Filters (such as Bloom filters) are data structures that speed up network routing and measurement operations by storing a compressed representation of a set. Filters are space efficient, but can make bounded one-sided errors: with tunable probability $\epsilon$, they may report that a query element is stored in the filter when it is not. This is called a false positive. Recent research has focused on designing methods for dynamically adapting filters to false positives, reducing the number of false positives when some elements are queried repeatedly. Ideally, an adaptive filter would incur a false positive with bounded probability $\epsilon$ for each new query element, and would incur $o(\epsilon)$ total false positives over all repeated queries to that element. We call such a filter support optimal. In this paper we design a new Adaptive Cuckoo Filter and show that it is support optimal (up to additive logarithmic terms) over any $n$ queries when storing a set of size $n$. Our filter is simple: fixing previous false positives requires a simple cuckoo operation, and the filter does not need to store any additional metadata. This is the first practical data structure that is support optimal, and the first filter that does not require additional space to fix false positives. We complement these bounds with experiments showing that our data structure is effective at fixing false positives on network traces, outperforming previous Adaptive Cuckoo Filters. Finally, we investigate adversarial adaptivity, a stronger notion of adaptivity in which an adaptive adversary repeatedly queries the filter, using the results of previous queries to drive the false positive rate as high as possible. We prove a lower bound showing that a broad family of filters, including all known Adaptive Cuckoo Filters, can be forced by such an adversary to incur a large number of false positives.
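
To illustrate the adaptivity mechanism, here is a toy sketch of a two-choice fingerprint filter, assuming the original set is kept on hand (as in deployments, in slower remote storage) so that a detected false positive can be repaired by a cuckoo move that gives the colliding element a fresh, bucket-dependent fingerprint. The hash choices, bucket layout, and names are ours for illustration, not the construction analyzed in the paper.

```python
# Toy adaptive filter: each stored element lives in one of two buckets and
# is represented by a fingerprint that depends on its current bucket, so a
# cuckoo move re-randomizes the fingerprint and breaks a witnessed collision.

import hashlib

NBUCKETS, FP_RANGE = 64, 256

def h(data, salt, mod):
    digest = hashlib.sha256(f"{salt}:{data}".encode()).digest()
    return int.from_bytes(digest[:8], "big") % mod

def buckets_of(x):
    return h(x, "b0", NBUCKETS), h(x, "b1", NBUCKETS)

def fingerprint(x, bucket):
    return h(x, f"fp{bucket}", FP_RANGE)

class AdaptiveFilterSketch:
    def __init__(self, items):
        self.members = set(items)                     # ground truth, kept "remotely"
        self.buckets = [[] for _ in range(NBUCKETS)]  # lists of (element, fp)
        for x in items:
            b = buckets_of(x)[0]
            self.buckets[b].append((x, fingerprint(x, b)))

    def query(self, q):
        """Return the filter's answer; repair any false positive it causes."""
        for b in buckets_of(q):
            for slot, (x, fp) in enumerate(self.buckets[b]):
                if fp == fingerprint(q, b):
                    if q in self.members:
                        return True                   # true positive
                    # false positive: cuckoo x to its other bucket, where it
                    # picks up a fresh fingerprint and stops colliding with q
                    b0, b1 = buckets_of(x)
                    other = b1 if b == b0 else b0
                    del self.buckets[b][slot]
                    self.buckets[other].append((x, fingerprint(x, other)))
                    return True                       # this query itself still erred
        return False

f = AdaptiveFilterSketch(["alice", "bob"])
print(f.query("alice"))      # True: a member is never missed
f.query("mallory")           # might answer True once (a false positive)...
print(f.query("mallory"))    # ...but the repair makes repeats (likely) clean
```
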
