
Fine-grained Complexity Meets IP = PSPACE

Added by Kaifeng Lyu
Publication date: 2018
Language: English





In this paper we study the fine-grained complexity of finding exact and approximate solutions to problems in P. Our main contribution is showing reductions from exact to approximate solutions for a host of such problems. As one (notable) example, we show that the Closest-LCS-Pair problem (given two sets of strings $A$ and $B$, compute exactly the maximum $\textsf{LCS}(a, b)$ with $(a, b) \in A \times B$) is equivalent to its approximation version (under near-linear time reductions, and with a constant approximation factor). More generally, we identify a class of problems, which we call BP-Pair-Class, comprising both exact and approximate solutions, and show that they are all equivalent under near-linear time reductions. Exploring this class and its properties, we also show:

$\bullet$ Under the NC-SETH assumption (a significantly more relaxed assumption than SETH), solving any of the problems in this class requires essentially quadratic time.

$\bullet$ Modest improvements on the running time of known algorithms (shaving log factors) would imply that NEXP is not in non-uniform $\textsf{NC}^1$.

$\bullet$ Finally, we leverage our techniques to show new barriers for deterministic approximation algorithms for LCS.

At the heart of these new results is a deep connection between interactive proof systems for bounded-space computations and the fine-grained complexity of exact and approximate solutions to problems in P. In particular, our results build on the proof techniques from the classical IP = PSPACE result.
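
For concreteness, here is a minimal brute-force baseline for the exact Closest-LCS-Pair problem (a sketch illustrating the problem statement, not the paper's techniques): it runs the textbook quadratic LCS dynamic program on every pair in $A \times B$, and the paper's hardness results suggest that this pair-enumeration cost is difficult to avoid.

from itertools import product

def lcs_length(x, y):
    # Classic O(|x|*|y|) dynamic program for the length of the
    # longest common subsequence of x and y.
    m, n = len(x), len(y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def closest_lcs_pair(A, B):
    # Exact Closest-LCS-Pair: maximum LCS(a, b) over all (a, b) in A x B.
    # Brute force enumerates all |A|*|B| pairs.
    return max(lcs_length(a, b) for a, b in product(A, B))

# Example: the best pair is ("abcd", "abce"), whose LCS is "abc".
print(closest_lcs_pair(["abcd", "xyz"], ["abce", "zzz"]))  # prints 3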



Related research

To date, the only way to argue polynomial lower bounds for dynamic algorithms is via fine-grained complexity arguments. These arguments rely on strong assumptions about specific problems such as the Strong Exponential Time Hypothesis (SETH) and the Online Matrix-Vector Multiplication Conjecture (OMv). While they have led to many exciting discoveries, dynamic algorithms still miss out on some of the benefits and lessons from the traditional ``coarse-grained'' approach that relates classes of problems such as P and NP to one another. In this paper we initiate the study of coarse-grained complexity theory for dynamic algorithms. Below are some of the questions that this theory can answer. What if dynamic Orthogonal Vectors (OV) is easy in the cell-probe model? A research program for proving polynomial unconditional lower bounds for dynamic OV in the cell-probe model is motivated by the fact that many conditional lower bounds can be shown via reductions from the dynamic OV problem. Since the cell-probe model is more powerful than word RAM and has historically allowed smaller upper bounds, it might turn out that dynamic OV is easy in the cell-probe model, making this research direction infeasible. Our theory implies that if this is the case, there will be very interesting algorithmic consequences: If dynamic OV can be maintained in polylogarithmic worst-case update time in the cell-probe model, then so can several important dynamic problems such as $k$-edge connectivity, $(1+\epsilon)$-approximate mincut, $(1+\epsilon)$-approximate matching, planar nearest neighbors, Chan's subset union and 3-vs-4 diameter. The same conclusion can be made when we replace dynamic OV by, e.g., subgraph connectivity, single source reachability, Chan's subset union, and 3-vs-4 diameter. Lower bounds for $k$-edge connectivity via dynamic OV? (See the full abstract in the pdf file.)
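
As a point of reference for the dynamic OV problem discussed above, here is a naive Python sketch (the interface and names are our illustration, not the authors'): it maintains two sets of 0/1 vectors under insertions and deletions and answers, by brute force, whether some cross pair is orthogonal; the conditional lower bounds the abstract refers to say that polylogarithmic update and query time for this task would be surprising.

class DynamicOV:
    # Naive sketch of a dynamic Orthogonal Vectors data structure.
    # The interface is an illustrative assumption, not taken from the paper.
    # It maintains two sets U, V of 0/1 vectors under insertions and
    # deletions, and answers whether some pair (u, v) in U x V is
    # orthogonal, i.e. sum_i u[i] * v[i] == 0.

    def __init__(self):
        self.U = []
        self.V = []

    def insert(self, side, vec):
        (self.U if side == "U" else self.V).append(vec)

    def delete(self, side, vec):
        (self.U if side == "U" else self.V).remove(vec)

    def has_orthogonal_pair(self):
        # Brute force: O(|U| * |V| * d) per query; the point of the
        # conditional lower bounds is that polylogarithmic time per
        # update/query is believed impossible.
        return any(all(a * b == 0 for a, b in zip(u, v))
                   for u in self.U for v in self.V)

ds = DynamicOV()
ds.insert("U", (1, 0, 1))
ds.insert("V", (1, 1, 0))        # shares a 1 with (1, 0, 1) at index 0
print(ds.has_orthogonal_pair())  # False
ds.insert("V", (0, 1, 0))        # orthogonal to (1, 0, 1)
print(ds.has_orthogonal_pair())  # True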
We revisit the complexity of online computation in the cell probe model. We consider a class of problems where we are first given a fixed pattern or vector $F$ of $n$ symbols and then one symbol arrives at a time in a stream. After each symbol has arrived we must output some function of $F$ and the $n$-length suffix of the arriving stream. Cell probe bounds of $\Omega(\delta\lg{n}/w)$ have previously been shown for both convolution and Hamming distance in this setting, where $\delta$ is the size of a symbol in bits and $w\in\Omega(\lg{n})$ is the cell size in bits. However, when $\delta$ is a constant, as it is in many natural situations, these previous results no longer give us non-trivial bounds. We introduce a new lop-sided information transfer proof technique which enables us to prove meaningful lower bounds even for constant size input alphabets. We use our new framework to prove an amortised cell probe lower bound of $\Omega(\lg^2 n/(w\cdot \lg\lg n))$ time per arriving bit for an online version of a well-studied problem known as pattern matching with address errors. This is the first non-trivial cell probe lower bound for any online problem on bit streams that still holds when the cell sizes are large. We also show the same bound for online convolution conditioned on a new combinatorial conjecture related to Toeplitz matrices.
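
To make the setting concrete, here is a naive baseline for the online Hamming distance problem described above (an illustration of the problem, not of the lower-bound construction): it spends O(n) work per arriving symbol, whereas the abstract's results concern how few cell probes per symbol are possible.

from collections import deque

def online_hamming(F, stream):
    # Given a fixed pattern F of length n, after each arriving symbol
    # output the Hamming distance between F and the current n-length
    # suffix of the stream. Naive cost: O(n) work per arrival.
    n = len(F)
    window = deque(maxlen=n)  # sliding n-length suffix of the stream
    for symbol in stream:
        window.append(symbol)
        if len(window) == n:
            yield sum(f != s for f, s in zip(F, window))

# Pattern "ab" against stream "abba": suffixes "ab", "bb", "ba".
print(list(online_hamming("ab", "abba")))  # prints [0, 1, 2]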
In this note we investigate the complexity of the Minimum Label Alignment problem and show that it is APX-hard.
Bobkov, Houdré, and the last author introduced a Poincaré-type functional parameter, $\lambda_\infty$, of a graph $G$. They related $\lambda_\infty$ to the \emph{vertex expansion} of the graph via a Cheeger-type inequality, analogous to the inequality relating the spectral gap of the graph, $\lambda_2$, to its \emph{edge expansion}. While $\lambda_2$ can be computed efficiently, the computational complexity of $\lambda_\infty$ has remained an open question. Following the work of the second author with Raghavendra and Vempala, wherein the complexity of $\lambda_\infty$ was related to the so-called small-set expansion (SSE) problem, it has been believed that computing $\lambda_\infty$ is a hard problem. We confirm this conjecture by proving that computing $\lambda_\infty$ is indeed NP-hard, even for weighted trees. Our gadget further proves NP-hardness of computing the \emph{spread constant} of a weighted tree, i.e., a geometric measure of the graph, introduced by Alon, Boppana, and Spencer in the context of deriving an asymptotic isoperimetric inequality for Cartesian products of graphs. We conclude this case by providing a fully polynomial time approximation scheme. We further study a generalization of the spread constant in the machine learning literature, namely the \emph{maximum variance embedding} problem. For trees, we provide fast combinatorial algorithms that avoid solving a semidefinite relaxation of the problem. On the other hand, for general graphs, we propose a randomized projection method that can outperform the optimal orthogonal projection, i.e., PCA, classically used for rounding of the optimum lifted solution (to the SDP relaxation) of the problem.
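
For reference, one common formulation of the spread constant (our paraphrase of the Alon-Boppana-Spencer definition; the weighted setting in the paper replaces the unit constraint by edge weights) is the maximum variance of a function that is 1-Lipschitz across edges:

$$ c(G) \;=\; \max\Bigl\{ \operatorname{Var}[f(X)] \;:\; f\colon V \to \mathbb{R},\ |f(u)-f(v)| \le 1 \ \text{for all } (u,v) \in E \Bigr\}, $$

where $X$ is a uniformly random vertex of $G$.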
We prove that Strings-and-Coins -- the combinatorial two-player game generalizing the dual of Dots-and-Boxes -- is strongly PSPACE-complete on multigraphs. This result improves the best previous result, NP-hardness, argued in Winning Ways. Our result also applies to the Nimstring variant, where the winner is determined by normal play; indeed, one step in our reduction is the standard reduction (also from Winning Ways) from Nimstring to Strings-and-Coins.