
Time-Space Trade-Offs for Longest Common Extensions

Publication date: 2012
Language: English





We revisit the longest common extension (LCE) problem, that is, preprocessing a string $T$ into a compact data structure that supports fast LCE queries. An LCE query takes a pair $(i,j)$ of indices in $T$ and returns the length of the longest common prefix of the suffixes of $T$ starting at positions $i$ and $j$. We study the time-space trade-offs for the problem, that is, the space used by the data structure vs. the worst-case time for answering an LCE query. Let $n$ be the length of $T$. Given a parameter $\tau$, $1 \leq \tau \leq n$, we show how to achieve either $O(\frac{n}{\sqrt{\tau}})$ space and $O(\tau)$ query time, or $O(\frac{n}{\tau})$ space and $O(\tau \log(|\mathrm{LCE}(i,j)|/\tau))$ query time, where $|\mathrm{LCE}(i,j)|$ denotes the length of the LCE returned by the query. These bounds provide the first smooth trade-offs for the LCE problem and almost match the previously known bounds at the extremes $\tau=1$ and $\tau=n$. We apply the result to obtain improved bounds for several applications where the LCE problem is the computational bottleneck, including approximate string matching and computing palindromes. We also present an efficient technique to reduce LCE queries on two strings to queries on a single string. Finally, we give a lower bound on the time-space product for LCE data structures in the non-uniform cell-probe model, showing that our second trade-off is nearly optimal.
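The data structures behind these bounds are involved; for reference, the sketch below shows only the naive baseline, which uses no preprocessing and answers a query in $O(|\mathrm{LCE}(i,j)|)$ time by direct character comparison, i.e. roughly the $\tau = n$ end of the trade-off. The function name and 0-indexing are illustrative, not taken from the paper.

```python
def lce(T: str, i: int, j: int) -> int:
    """Naive longest common extension: length of the longest common
    prefix of the suffixes T[i:] and T[j:] (0-indexed)."""
    n = len(T)
    k = 0
    while i + k < n and j + k < n and T[i + k] == T[j + k]:
        k += 1
    return k

# Example: in "banana", the suffixes at positions 1 and 3 are "anana"
# and "ana"; their longest common prefix is "ana", so the answer is 3.
assert lce("banana", 1, 3) == 3
```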



Related research

We consider the classic problem of computing the Longest Common Subsequence (LCS) of two strings of length $n$. While a simple quadratic algorithm has been known for the problem for more than 40 years, no faster algorithm has been found despite an extensive effort. The lack of progress on the problem has recently been explained by Abboud, Backurs, and Vassilevska Williams [FOCS'15] and Bringmann and Künnemann [FOCS'15], who proved that there is no subquadratic algorithm unless the Strong Exponential Time Hypothesis fails. This has led the community to look for subquadratic approximation algorithms for the problem. Yet, unlike the edit distance problem, for which a constant-factor approximation in almost-linear time is known, very little progress has been made on LCS, making it a notoriously difficult problem also in the realm of approximation. For the general setting, only a naive $O(n^{\varepsilon/2})$-approximation algorithm with running time $\tilde{O}(n^{2-\varepsilon})$ has been known, for any constant $0 < \varepsilon \le 1$. Recently, a breakthrough result by Hajiaghayi, Seddighin, Seddighin, and Sun [SODA'19] provided a linear-time algorithm that yields an $O(n^{0.497956})$-approximation in expectation, improving upon the naive $O(\sqrt{n})$-approximation for the first time. In this paper, we provide an algorithm that in time $O(n^{2-\varepsilon})$ computes an $\tilde{O}(n^{2\varepsilon/5})$-approximation with high probability, for any $0 < \varepsilon \le 1$. Our result (1) gives an $\tilde{O}(n^{0.4})$-approximation in linear time, improving upon the bound of Hajiaghayi, Seddighin, Seddighin, and Sun, (2) provides an algorithm whose approximation guarantee scales with any subquadratic running time $O(n^{2-\varepsilon})$, improving upon the naive bound of $O(n^{\varepsilon/2})$ for every $\varepsilon$, and (3) succeeds with high probability instead of only in expectation.
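For context, the "simple quadratic algorithm" mentioned above is the textbook dynamic program for exact LCS; a minimal Python sketch follows. This is the $O(n^2)$ exact baseline, not the approximation algorithm of the paper.

```python
def lcs_length(a: str, b: str) -> int:
    """Textbook O(|a|*|b|) dynamic program for the length of the
    longest common subsequence of a and b."""
    m, n = len(a), len(b)
    # dp[i][j] = LCS length of the prefixes a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

# Example: an LCS of "ABCBDAB" and "BDCABA" is "BCBA", of length 4.
assert lcs_length("ABCBDAB", "BDCABA") == 4
```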
We present time-space trade-offs for computing the Euclidean minimum spanning tree of a set $S$ of $n$ point sites in the plane. More precisely, we assume that $S$ resides in a random-access memory that can only be read. The edges of the Euclidean minimum spanning tree $\text{EMST}(S)$ have to be reported sequentially, and they cannot be accessed or modified afterwards. There is a parameter $s \in \{1, \dots, n\}$ so that the algorithm may use $O(s)$ cells of read-write memory (called the workspace) for its computations. Our goal is to find an algorithm that has the best possible running time for any given $s$ between $1$ and $n$. We show how to compute $\text{EMST}(S)$ in $O\big((n^3/s^2)\log s\big)$ time with $O(s)$ cells of workspace, giving a smooth trade-off between the two best known bounds $O(n^3)$ for $s = 1$ and $O(n \log n)$ for $s = n$. For this, we run Kruskal's algorithm on the relative neighborhood graph (RNG) of $S$. It is a classic fact that the minimum spanning tree of $\text{RNG}(S)$ is exactly $\text{EMST}(S)$. To implement Kruskal's algorithm with $O(s)$ cells of workspace, we define $s$-nets, a compact representation of planar graphs. This allows us to efficiently maintain and update the components of the current minimum spanning forest as the edges are being inserted.
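As a point of reference only, the sketch below shows plain Kruskal's algorithm with a union-find structure over an explicitly given edge list. The paper's actual contribution, running this on $\text{RNG}(S)$ within $O(s)$ cells of workspace via $s$-nets, is not reflected here, and the precomputed edge list is an assumed input.

```python
def kruskal_mst(num_vertices: int, edges):
    """Kruskal's algorithm: edges is an iterable of (weight, u, v) tuples;
    returns the MST edge list (assumes the graph is connected)."""
    parent = list(range(num_vertices))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):     # process edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                  # u and v lie in different components
            parent[ru] = rv           # union the two components
            mst.append((w, u, v))
    return mst

# Tiny example on 4 vertices: the weight-3 edge closes a cycle and is skipped.
edges = [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]
assert [e[1:] for e in kruskal_mst(4, edges)] == [(0, 1), (1, 2), (2, 3)]
```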
We show that there are CNF formulas which can be refuted in resolution in both small space and small width, but for which any small-width proof must have space exceeding by far the linear worst-case upper bound. This significantly strengthens the space-width trade-offs in [Ben-Sasson '09], and provides one more example of trade-offs in the supercritical regime above worst case recently identified by [Razborov '16]. We obtain our results by using Razborov's new hardness condensation technique and combining it with the space lower bounds in [Ben-Sasson and Nordström '08].
In this work, we consider a variant of the classical Longest Common Subsequence problem called Doubly-Constrained Longest Common Subsequence (DC-LCS). Given two strings s1 and s2 over an alphabet A, a set Cs of strings, and a function Co from A to N, the DC-LCS problem consists of finding the longest common subsequence s of s1 and s2 such that s is a supersequence of all the strings in Cs and such that the number of occurrences in s of each symbol a in A is upper bounded by Co(a). The DC-LCS problem provides a clear mathematical formulation of a sequence comparison problem in Computational Biology and generalizes two other constrained variants of the LCS problem: the Constrained LCS and the Repetition-Free LCS. We present two results for the DC-LCS problem. First, we give a fixed-parameter algorithm where the parameter is the length of the solution. Second, we prove a parameterized hardness result for the Constrained LCS problem when the parameters are the number of constraint strings and the size of the alphabet A. This hardness result also implies the parameterized hardness of the DC-LCS problem (with the same parameters) and its NP-hardness when the size of the alphabet is constant.
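To make the constraints concrete, the Python sketch below checks whether a candidate string s satisfies the DC-LCS requirements stated above. The helper names and the dictionary encoding of Co are illustrative assumptions; the paper's fixed-parameter algorithm itself is not shown.

```python
from collections import Counter

def is_subsequence(small: str, big: str) -> bool:
    """True if `small` can be obtained from `big` by deleting characters."""
    it = iter(big)
    return all(ch in it for ch in small)

def is_feasible_dc_lcs(s: str, s1: str, s2: str, cs, co) -> bool:
    """Check the DC-LCS constraints on a candidate s: s must be a common
    subsequence of s1 and s2, a supersequence of every string in cs, and
    respect the per-symbol occurrence bounds given by the dict co."""
    if not (is_subsequence(s, s1) and is_subsequence(s, s2)):
        return False
    if not all(is_subsequence(c, s) for c in cs):
        return False
    counts = Counter(s)
    return all(counts[a] <= co.get(a, 0) for a in counts)

# Example: "ab" is a common subsequence of "axb" and "aab", a supersequence
# of "b", and respects the occurrence bounds {a: 1, b: 2}.
assert is_feasible_dc_lcs("ab", "axb", "aab", ["b"], {"a": 1, "b": 2})
```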
At CPM 2017, Castelli et al. defined and studied a new variant of the Longest Common Subsequence problem, termed the Longest Filled Common Subsequence problem (LFCS). For the LFCS problem, the input consists of two strings $A$ and $B$ and a multiset of characters $\mathcal{M}$. The goal is to insert the characters from $\mathcal{M}$ into the string $B$, thus obtaining a new string $B^*$, such that the Longest Common Subsequence (LCS) between $A$ and $B^*$ is maximized. Castelli et al. show that the problem is NP-hard and provide a 3/5-approximation algorithm for it. In this paper we study the problem from the experimental point of view. We introduce, implement, and test new heuristic algorithms and compare them with the approximation algorithm of Castelli et al. Moreover, we introduce an Integer Linear Programming (ILP) model for the problem and use the state-of-the-art ILP solver Gurobi to obtain exact solutions for moderately sized instances.