
Upper and Lower Bounds for Fully Retroactive Graph Problems

Posted by Xiaowei Wu
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





Classic dynamic data structure problems maintain a data structure subject to a sequence S of updates and answer queries using the latest version of the data structure, i.e., the data structure after processing the whole sequence. To handle operations that change the sequence S of updates, Demaine et al. (TALG 2007) introduced retroactive data structures. A retroactive operation modifies the update sequence S at a given position t, called time, and either creates or cancels an update in S at time t. A partially retroactive data structure restricts queries to be executed exclusively on the latest version of the data structure. A fully retroactive data structure supports queries at any time t: a query at time t is answered using only the updates of S up to time t. If the sequence S consists only of insertions, the resulting data structure is an incremental retroactive data structure. While efficient retroactive data structures have been proposed for classic data structures, e.g., the stack, priority queue and binary search tree, retroactive versions of graph problems are rarely studied. In this paper we study retroactive graph problems including connectivity, minimum spanning forest (MSF), maximum degree, etc. We provide fully retroactive data structures for maintaining the maximum degree, connectivity and MSF in $\tilde{O}(n)$ time per operation. We also give an algorithm for incremental fully retroactive connectivity with $\tilde{O}(1)$ time per operation. We complement our algorithms with almost tight hardness results. We show that under the OMv conjecture (proposed by Henzinger et al. (STOC 2015)), there do not exist fully retroactive data structures maintaining connectivity or MSF, or incremental fully retroactive data structures maintaining the maximum degree, with $O(n^{1-\epsilon})$ time per operation, for any constant $\epsilon > 0$.
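
To make the retroactive interface concrete, the following minimal Python sketch (our own illustration, not the paper's construction) implements incremental fully retroactive connectivity the naive way: the update sequence is kept sorted by time, and every query replays the prefix of insertions up to time t with a union-find. This costs time linear in the number of updates per query, far from the $\tilde{O}(1)$ and $\tilde{O}(n)$ bounds achieved in the paper; class and method names are ours.

import bisect


class NaiveFullyRetroactiveConnectivity:
    def __init__(self, n):
        self.n = n                 # number of vertices
        self.updates = []          # sorted list of (time, u, v) edge insertions

    def insert_edge(self, t, u, v):
        """Retroactively create the update 'add edge (u, v)' at time t."""
        bisect.insort(self.updates, (t, u, v))

    def cancel(self, t):
        """Retroactively cancel the update at time t, if present."""
        i = bisect.bisect_left(self.updates, (t,))
        if i < len(self.updates) and self.updates[i][0] == t:
            self.updates.pop(i)

    def connected(self, t, u, v):
        """Query at time t: replay all insertions with time <= t."""
        parent = list(range(self.n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x

        for time, a, b in self.updates:
            if time > t:
                break
            parent[find(a)] = find(b)
        return find(u) == find(v)


# Example: edge (0, 1) exists only from time 5 onward.
ds = NaiveFullyRetroactiveConnectivity(3)
ds.insert_edge(5, 0, 1)
ds.insert_edge(7, 1, 2)
print(ds.connected(3, 0, 1))   # False: no updates before time 3
print(ds.connected(6, 0, 1))   # True
print(ds.connected(8, 0, 2))   # True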




Read also

The point placement problem is to determine the positions of a set of $n$ distinct points, $P = \{p_1, p_2, \ldots, p_n\}$, on a line uniquely, up to translation and reflection, from the fewest possible distance queries between pairs of points. Each distance query corresponds to an edge in a graph, called the point placement graph (ppg), whose vertex set is $P$. The uniqueness requirement of the placement translates to line rigidity of the ppg. In this paper we show how to construct in 2 rounds a line rigid point placement graph of size $9n/7 + O(1)$. This improves the existing best result of $4n/3 + O(1)$. We also improve the lower bound on 2-round algorithms from $17n/16$ to $9n/8$.
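
As a toy illustration of line rigidity (our own brute-force sketch, not the paper's 2-round construction), the checker below fixes the first point at 0 and the second at their queried distance to factor out translation and reflection, then counts how many complete placements satisfy every queried distance; the ppg is line rigid exactly when there is one.

def is_line_rigid(n, dist):
    # dist maps a queried pair (i, j) to its distance; the pair (0, 1)
    # must be among the queries so the placement can be anchored.
    d = {}
    for (i, j), v in dist.items():
        d[(i, j)] = d[(j, i)] = v

    pos = {0: 0.0, 1: float(d[(0, 1)])}

    def extend(i, count):
        if i == n:
            return count + 1
        # candidate coordinates implied by already-placed queried neighbours
        cands = set()
        for j in pos:
            if (i, j) in d:
                cands.update({pos[j] + d[(i, j)], pos[j] - d[(i, j)]})
        for c in cands:
            if all(abs(abs(c - pos[j]) - d[(i, j)]) < 1e-9
                   for j in pos if (i, j) in d):
                pos[i] = c
                count = extend(i + 1, count)
                del pos[i]
        return count

    return extend(2, 0) == 1


# A path 0-1-2 is not rigid (point 2 fits on either side of point 1),
# but adding the query (0, 2) pins the placement down.
print(is_line_rigid(3, {(0, 1): 3, (1, 2): 2}))              # False
print(is_line_rigid(3, {(0, 1): 3, (1, 2): 2, (0, 2): 5}))   # True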
We consider a range of simply stated dynamic data structure problems on strings. An update changes one symbol in the input and a query asks us to compute some function of the pattern of length $m$ and a substring of a longer text. We give both conditional and unconditional lower bounds for variants of exact matching with wildcards, inner product, and Hamming distance computation via a sequence of reductions. As an example, we show that there does not exist an $O(m^{1/2-\varepsilon})$ time algorithm for a large range of these problems unless the online Boolean matrix-vector multiplication conjecture is false. We also provide nearly matching upper bounds for most of the problems we consider.
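
For concreteness, here is a naive baseline (ours, not from the paper) for one instance of this setup, dynamic Hamming distance: an update overwrites one text symbol, and a query scans the pattern against a text window in $O(m)$ time, the kind of running time that the conditional lower bound says cannot be improved to $O(m^{1/2-\varepsilon})$ under the OMv conjecture.

class DynamicHamming:
    def __init__(self, text, pattern):
        self.text = list(text)
        self.pattern = pattern

    def update(self, pos, symbol):
        # change one symbol of the text
        self.text[pos] = symbol

    def query(self, i):
        # Hamming distance between the pattern and text[i : i + m], O(m) time
        m = len(self.pattern)
        window = self.text[i:i + m]
        return sum(a != b for a, b in zip(window, self.pattern))


ds = DynamicHamming("abacaba", "aba")
print(ds.query(0))   # 0: "aba" matches exactly
ds.update(1, "c")    # text becomes "acacaba"
print(ds.query(0))   # 1: one mismatch against "aba"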
We study the space complexity of sketching cuts and Laplacian quadratic forms of graphs. We show that any data structure which approximately stores the sizes of all cuts in an undirected graph on $n$ vertices up to a $1+\epsilon$ error must use $\Omega(n\log n/\epsilon^2)$ bits of space in the worst case, improving the $\Omega(n/\epsilon^2)$ bound of Andoni et al. and matching the best known upper bound achieved by spectral sparsifiers. Our proof is based on a rigidity phenomenon for cut (and spectral) approximation which may be of independent interest: any two $d$-regular graphs which approximate each other's cuts significantly better than a random graph approximates the complete graph must overlap in a constant fraction of their edges.
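
The object being stored here is simply the cut function of the graph, evaluated within a $1+\epsilon$ factor on every vertex subset. The tiny snippet below (ours, for orientation only) shows what one exact query looks like; the result above concerns how few bits any structure answering all such queries approximately can use.

def cut_size(edges, S):
    # number of edges with exactly one endpoint inside S
    S = set(S)
    return sum(1 for u, v in edges if (u in S) != (v in S))


edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # a 4-cycle plus a chord
print(cut_size(edges, {0, 1}))   # 3 edges cross the cut {0,1} vs {2,3}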
We study the quantum query complexity of two problems. First, we consider the problem of determining if a sequence of parentheses is properly balanced (a Dyck word), with a depth of at most $k$. We call this the $Dyck_{k,n}$ problem. We prove a lower bound of $\Omega(c^k \sqrt{n})$, showing that the complexity of this problem increases exponentially in $k$. Here $n$ is the length of the word. When $k$ is a constant, this is interesting as a representative example of star-free languages for which a surprising $\tilde{O}(\sqrt{n})$-query quantum algorithm was recently constructed by Aaronson et al. Their proof does not give rise to a general algorithm. When $k$ is not a constant, $Dyck_{k,n}$ is not context-free. We give an algorithm with $O\left(\sqrt{n}(\log n)^{0.5k}\right)$ quantum queries for $Dyck_{k,n}$ for all $k$. This is better than the trivial upper bound $n$ for $k=o\left(\frac{\log n}{\log\log n}\right)$. Second, we consider connectivity problems on grid graphs in 2 dimensions, if some of the edges of the grid may be missing. By embedding the balanced parentheses problem into the grid, we show a lower bound of $\Omega(n^{1.5-\epsilon})$ for the directed 2D grid and $\Omega(n^{2-\epsilon})$ for the undirected 2D grid. The directed problem is interesting as a black-box model for a class of classical dynamic programming strategies including the one that is usually used for the well-known edit distance problem. We also show a generalization of this result to more than 2 dimensions.
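
For reference, deciding the $Dyck_{k,n}$ property classically takes a single linear scan, as in the short sketch below (ours); the results above are about how many queries to the input a quantum algorithm needs for the same decision.

def is_dyck_with_depth(word, k):
    # True iff word is a balanced parenthesis string with nesting depth <= k
    depth = 0
    for ch in word:
        depth += 1 if ch == "(" else -1
        if depth < 0 or depth > k:
            return False
    return depth == 0


print(is_dyck_with_depth("(()())", 2))   # True: maximum depth is 2
print(is_dyck_with_depth("((()))", 2))   # False: depth reaches 3
print(is_dyck_with_depth("())(", 2))     # False: not balanced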
We consider the problem of testing graph cluster structure: given access to a graph $G=(V, E)$, can we quickly determine whether the graph can be partitioned into a few clusters with good inner conductance, or is far from any such graph? This is a generalization of the well-studied problem of testing graph expansion, where one wants to distinguish between the graph having good expansion (i.e. being a good single cluster) and the graph having a sparse cut (i.e. being a union of at least two clusters). A recent work of Czumaj, Peng, and Sohler (STOC 2015) gave an ingenious sublinear time algorithm for testing $k$-clusterability in time $\tilde{O}(n^{1/2}\,\text{poly}(k))$: their algorithm implicitly embeds a random sample of vertices of the graph into Euclidean space, and then clusters the samples based on estimates of Euclidean distances between the points. This yields a very efficient testing algorithm, but only works if the cluster structure is very strong: it is necessary to assume that the gap between conductances of accepted and rejected graphs is at least logarithmic in the size of the graph $G$. In this paper we show how one can leverage more refined geometric information, namely angles as opposed to distances, to obtain a sublinear time tester that works even when the gap is a sufficiently large constant. Our tester is based on the singular value decomposition of a natural matrix derived from random walk transition probabilities from a small sample of seed nodes. We complement our algorithm with a matching lower bound on the query complexity of testing clusterability. Our lower bound is based on a novel property testing problem, which we analyze using Fourier analytic tools. As a byproduct of our techniques, we also achieve new lower bounds for the problem of approximating MAX-CUT value in sublinear time.
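
The following numpy sketch (ours; the function name, parameters and walk length are illustrative, not the tester from the paper) conveys the flavor of the embedding step: start lazy random walks from a few seed vertices, collect their t-step distributions as rows of a matrix, and inspect its singular values, which in the toy example below reveal the two disjoint clusters.

import numpy as np


def seed_embedding(adj, seeds, t=10):
    """Rows are the t-step lazy random walk distributions started at the seeds."""
    n = adj.shape[0]
    deg = adj.sum(axis=1, keepdims=True)
    walk = 0.5 * np.eye(n) + 0.5 * adj / deg        # lazy random walk matrix
    rows = np.eye(n)[seeds]                          # indicator rows for the seeds
    for _ in range(t):
        rows = rows @ walk
    return rows


# Two disjoint triangles: walks from seeds in different clusters give nearly
# orthogonal rows, walks from seeds in the same cluster give nearly parallel ones.
A = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    A[u, v] = A[v, u] = 1
M = seed_embedding(A, seeds=[0, 1, 3])
sv = np.linalg.svd(M, compute_uv=False)
print(np.round(sv, 3))   # two non-negligible singular values -> two clusters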