
Upper and lower bounds for dynamic data structures on strings

Published by Kasper Green Larsen
Publication date: 2018
Research field: Informatics Engineering
Paper language: English





We consider a range of simply stated dynamic data structure problems on strings. An update changes one symbol in the input and a query asks us to compute some function of the pattern of length $m$ and a substring of a longer text. We give both conditional and unconditional lower bounds for variants of exact matching with wildcards, inner product, and Hamming distance computation via a sequence of reductions. As an example, we show that there does not exist an $O(m^{1/2-\varepsilon})$ time algorithm for a large range of these problems unless the online Boolean matrix-vector multiplication conjecture is false. We also provide nearly matching upper bounds for most of the problems we consider.
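To make the problem interface concrete, here is a minimal Python sketch (not the paper's data structure) of the trivial baseline: constant-time single-symbol updates and Hamming-distance queries answered by a direct $O(m)$ scan, which serves as the point of comparison for the bounds above.

```python
# Minimal sketch of the dynamic string problem interface from the abstract,
# with the trivial O(m)-per-query baseline. An update changes one symbol of
# the text or the pattern; a query asks for the Hamming distance between the
# pattern and the length-m substring of the text starting at position i.

class NaiveDynamicHamming:
    def __init__(self, text, pattern):
        self.text = list(text)
        self.pattern = list(pattern)

    def update_text(self, i, c):
        self.text[i] = c          # O(1) update

    def update_pattern(self, j, c):
        self.pattern[j] = c       # O(1) update

    def hamming_query(self, i):
        """Hamming distance between pattern and text[i : i + m], in O(m) time."""
        m = len(self.pattern)
        return sum(1 for j in range(m) if self.text[i + j] != self.pattern[j])


ds = NaiveDynamicHamming("abracadabra", "abra")
print(ds.hamming_query(0))   # 0: "abra" matches the text prefix exactly
ds.update_text(1, "x")
print(ds.hamming_query(0))   # 1 after the single-symbol update
```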




Read also

We study the quantum query complexity of two problems. First, we consider the problem of determining if a sequence of parentheses is a properly balanced one (a Dyck word), with a depth of at most $k$. We call this the $Dyck_{k,n}$ problem. We prove a lower bound of $\Omega(c^k \sqrt{n})$, showing that the complexity of this problem increases exponentially in $k$. Here $n$ is the length of the word. When $k$ is a constant, this is interesting as a representative example of star-free languages for which a surprising $\tilde{O}(\sqrt{n})$ query quantum algorithm was recently constructed by Aaronson et al. Their proof does not give rise to a general algorithm. When $k$ is not a constant, $Dyck_{k,n}$ is not context-free. We give an algorithm with $O\left(\sqrt{n}(\log n)^{0.5k}\right)$ quantum queries for $Dyck_{k,n}$ for all $k$. This is better than the trivial upper bound $n$ for $k=o\left(\frac{\log n}{\log\log n}\right)$. Second, we consider connectivity problems on grid graphs in 2 dimensions, if some of the edges of the grid may be missing. By embedding the balanced parentheses problem into the grid, we show a lower bound of $\Omega(n^{1.5-\epsilon})$ for the directed 2D grid and $\Omega(n^{2-\epsilon})$ for the undirected 2D grid. The directed problem is interesting as a black-box model for a class of classical dynamic programming strategies including the one that is usually used for the well-known edit distance problem. We also show a generalization of this result to more than 2 dimensions.
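As a point of reference for the problem definition (this is the classical one-pass check, not the quantum algorithm from the abstract), membership in $Dyck_{k,n}$ can be tested as follows:

```python
# Classical illustration of the Dyck_{k,n} membership test: is a parenthesis
# string balanced with nesting depth at most k? A single left-to-right scan
# tracking the current depth suffices classically.

def is_dyck_with_depth(word, k):
    depth = 0
    for ch in word:
        depth += 1 if ch == "(" else -1
        if depth < 0 or depth > k:   # unmatched ")" or depth budget exceeded
            return False
    return depth == 0                # every "(" must eventually be closed

print(is_dyck_with_depth("(()())", 2))   # True: balanced, maximum depth 2
print(is_dyck_with_depth("((()))", 2))   # False: depth reaches 3 > k
```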
An assignment of colours to the vertices of a graph is stable if any two vertices of the same colour have identically coloured neighbourhoods. The goal of colour refinement is to find a stable colouring that uses a minimum number of colours. This is a widely used subroutine for graph isomorphism testing algorithms, since any automorphism needs to be colour preserving. We give an $O((m+n)\log n)$ algorithm for finding a canonical version of such a stable colouring, on graphs with $n$ vertices and $m$ edges. We show that no faster algorithm is possible, under some modest assumptions about the type of algorithm, which captures all known colour refinement algorithms.
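For illustration only, here is a naive colour refinement sketch (one-dimensional Weisfeiler-Leman), not the $O((m+n)\log n)$ algorithm of the abstract; the adjacency-dictionary input format is an assumption made for the example.

```python
# Naive colour refinement: repeatedly recolour every vertex by the pair
# (its colour, the sorted multiset of its neighbours' colours) until the
# colouring stops changing, i.e. is stable.

def colour_refinement(adj):
    """adj: dict mapping each vertex to a list of its neighbours."""
    colour = {v: 0 for v in adj}                # start with a uniform colouring
    while True:
        signature = {v: (colour[v], tuple(sorted(colour[u] for u in adj[v])))
                     for v in adj}
        # canonically relabel the distinct signatures with small integers
        new_ids = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        new_colour = {v: new_ids[signature[v]] for v in adj}
        if new_colour == colour:                # stable: same-coloured vertices agree
            return colour
        colour = new_colour

# A path on 4 vertices: the two endpoints get one colour, the two inner
# vertices another, and the colouring is then stable.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(colour_refinement(path))   # {0: 0, 1: 1, 2: 1, 3: 0}
```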
We give lower bounds on the performance of two of the most popular sampling methods in practice, the Metropolis-adjusted Langevin algorithm (MALA) and multi-step Hamiltonian Monte Carlo (HMC) with a leapfrog integrator, when applied to well-conditioned distributions. Our main result is a nearly-tight lower bound of $\widetilde{\Omega}(\kappa d)$ on the mixing time of MALA from an exponentially warm start, matching a line of algorithmic results up to logarithmic factors and answering an open question of Chewi et al. We also show that a polynomial dependence on dimension is necessary for the relaxation time of HMC under any number of leapfrog steps, and bound the gains achievable by changing the step count. Our HMC analysis draws upon a novel connection between leapfrog integration and Chebyshev polynomials, which may be of independent interest.
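For readers unfamiliar with the sampler being analysed, here is a hedged sketch of a single MALA step on a toy Gaussian target; the step size, iteration count, and target are illustrative choices for the example, not values from the paper.

```python
# One Metropolis-adjusted Langevin (MALA) step: a Langevin proposal driven by
# the gradient of the log-density, followed by a Metropolis accept/reject
# correction using the asymmetric proposal densities.

import numpy as np

def mala_step(x, grad_log_pi, log_pi, step, rng):
    mean_fwd = x + step * grad_log_pi(x)
    y = mean_fwd + np.sqrt(2 * step) * rng.standard_normal(x.shape)
    mean_bwd = y + step * grad_log_pi(y)
    log_q_fwd = -np.sum((y - mean_fwd) ** 2) / (4 * step)
    log_q_bwd = -np.sum((x - mean_bwd) ** 2) / (4 * step)
    log_accept = log_pi(y) - log_pi(x) + log_q_bwd - log_q_fwd
    return y if np.log(rng.uniform()) < log_accept else x

# Standard Gaussian target: log pi(x) = -|x|^2 / 2, gradient -x.
rng = np.random.default_rng(0)
x = np.ones(3)
for _ in range(1000):
    x = mala_step(x, lambda z: -z, lambda z: -0.5 * np.sum(z ** 2), 0.1, rng)
print(x)   # an approximate sample from the standard Gaussian
```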
Classic dynamic data structure problems maintain a data structure subject to a sequence S of updates and they answer queries using the latest version of the data structure, i.e., the data structure after processing the whole sequence. To handle operations that change the sequence S of updates, Demaine et al. (TALG 2007) introduced retroactive data structures. A retroactive operation modifies the update sequence S in a given position t, called time, and either creates or cancels an update in S at time t. A partially retroactive data structure restricts queries to be executed exclusively in the latest version of the data structure. A fully retroactive data structure supports queries at any time t: a query at time t is answered using only the updates of S up to time t. If the sequence S only consists of insertions, the resulting data structure is an incremental retroactive data structure. While efficient retroactive data structures have been proposed for classic data structures, e.g., stack, priority queue and binary search tree, retroactive versions of graph problems are rarely studied. In this paper we study retroactive graph problems including connectivity, minimum spanning forest (MSF), maximum degree, etc. We provide fully retroactive data structures for maintaining the maximum degree, connectivity and MSF in $\tilde{O}(n)$ time per operation. We also give an algorithm for the incremental fully retroactive connectivity with $\tilde{O}(1)$ time per operation. We complement our algorithms with almost tight hardness results. We show that under the OMv conjecture (proposed by Henzinger et al. (STOC 2015)), there does not exist a fully retroactive data structure maintaining connectivity or MSF, or an incremental fully retroactive data structure maintaining the maximum degree, with $O(n^{1-\epsilon})$ time per operation, for any constant $\epsilon > 0$.
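To illustrate the retroactive interface (this is a naive replay-based sketch, not the paper's $\tilde{O}(n)$-per-operation structures), a fully retroactive connectivity structure can simply store timestamped edge insertions and, on each query at time t, replay the updates with time at most t:

```python
# Naive fully retroactive connectivity: retroactive operations insert or cancel
# an edge-insertion update at a given time t; a query at time t is answered by
# replaying only the updates with time <= t through a fresh union-find.

class NaiveFullyRetroactiveConnectivity:
    def __init__(self, n):
        self.n = n
        self.updates = {}                       # time -> inserted edge (u, v)

    def insert_update(self, t, edge):
        self.updates[t] = edge                  # retroactively add an update at time t

    def cancel_update(self, t):
        self.updates.pop(t, None)               # retroactively cancel the update at time t

    def connected(self, u, v, t):
        """Query at time t, answered by replaying all updates up to time t."""
        parent = list(range(self.n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path compression
                x = parent[x]
            return x
        for time in sorted(self.updates):
            if time > t:
                break
            a, b = self.updates[time]
            parent[find(a)] = find(b)
        return find(u) == find(v)

rc = NaiveFullyRetroactiveConnectivity(4)
rc.insert_update(5, (0, 1))
rc.insert_update(10, (1, 2))
print(rc.connected(0, 2, 7))    # False: the (1, 2) edge only exists from time 10
print(rc.connected(0, 2, 12))   # True after replaying both updates
rc.cancel_update(5)             # retroactively cancel the time-5 insertion
print(rc.connected(0, 2, 12))   # False again
```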
Brandão and Svore very recently gave quantum algorithms for approximately solving semidefinite programs, which in some regimes are faster than the best-possible classical algorithms in terms of the dimension $n$ of the problem and the number $m$ of constraints, but worse in terms of various other parameters. In this paper we improve their algorithms in several ways, getting better dependence on those other parameters. To this end we develop new techniques for quantum algorithms, for instance a general way to efficiently implement smooth functions of sparse Hamiltonians, and a generalized minimum-finding procedure. We also show limits on this approach to quantum SDP-solvers, for instance for combinatorial optimization problems that have a lot of symmetry. Finally, we prove some general lower bounds showing that in the worst case, the complexity of every quantum LP-solver (and hence also SDP-solver) has to scale linearly with $mn$ when $m\approx n$, which is the same as classical.