
Fine-Grained Completeness for Optimization in P

Posted by Alejandro Cassis
Publication date: 2021
Research field: Informatics Engineering
Language: English





We initiate the study of fine-grained completeness theorems for exact and approximate optimization in the polynomial-time regime. Inspired by the first completeness results for decision problems in P (Gao, Impagliazzo, Kolokolova, Williams, TALG 2019) as well as the classic class MaxSNP and MaxSNP-completeness for NP optimization problems (Papadimitriou, Yannakakis, JCSS 1991), we define polynomial-time analogues MaxSP and MinSP, which contain a number of natural optimization problems in P, including Maximum Inner Product, general forms of nearest neighbor search and optimization variants of the $k$-XOR problem. Specifically, we define MaxSP as the class of problems definable as $\max_{x_1,\dots,x_k} \#\{ (y_1,\dots,y_\ell) : \phi(x_1,\dots,x_k, y_1,\dots,y_\ell) \}$, where $\phi$ is a quantifier-free first-order property over a given relational structure (with MinSP defined analogously). On $m$-sized structures, we can solve each such problem in time $O(m^{k+\ell-1})$. Our results are:

- We determine (a sparse variant of) the Maximum/Minimum Inner Product problem as complete under *deterministic* fine-grained reductions: A strongly subquadratic algorithm for Maximum/Minimum Inner Product would beat the baseline running time of $O(m^{k+\ell-1})$ for *all* problems in MaxSP/MinSP by a polynomial factor.
- This completeness transfers to approximation: Maximum/Minimum Inner Product is also complete in the sense that a strongly subquadratic $c$-approximation would give a $(c+\varepsilon)$-approximation for all MaxSP/MinSP problems in time $O(m^{k+\ell-1-\delta})$, where $\varepsilon > 0$ can be chosen arbitrarily small.

Combining our completeness with (Chen, Williams, SODA 2019), we obtain the perhaps surprising consequence that refuting the OV Hypothesis is *equivalent* to giving a $O(1)$-approximation for all MinSP problems in faster-than-$O(m^{k+\ell-1})$ time.
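As a concrete illustration (not part of the abstract), the sparse Maximum Inner Product problem can be cast in the MaxSP template with $k = 2$ and $\ell = 1$: maximize, over a pair of vectors, the number of coordinates on which both are nonzero. The following minimal Python sketch of the quadratic brute-force baseline is purely illustrative; the set representation, function name, and example data are not from the paper.

```python
# Illustrative sketch (not from the paper): the quadratic brute-force baseline
# for sparse Maximum Inner Product, the MaxSP-complete problem.
# Boolean vectors are assumed to be given as the sets of their nonzero
# coordinates, so the inner product of two vectors is the intersection size.

def max_inner_product(A, B):
    """Return the maximum <a, b> over all a in A, b in B (Boolean vectors as sets)."""
    best = 0
    for a in A:                      # the two "max" variables x_1, x_2
        for b in B:
            best = max(best, len(a & b))  # counts the y's satisfying phi
    return best

if __name__ == "__main__":
    A = [{0, 2, 5}, {1, 3}]
    B = [{2, 5, 7}, {0, 1, 3}]
    print(max_inner_product(A, B))   # -> 2, e.g. {0, 2, 5} vs {2, 5, 7}
```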




Read also

Mitali Bafna, Nikhil Vyas (2021)
The problem of solving linear systems is one of the most fundamental problems in computer science, where given a satisfiable linear system $(A,b)$, for $A \in \mathbb{R}^{n \times n}$ and $b \in \mathbb{R}^n$, we wish to find a vector $x \in \mathbb{R}^n$ such that $Ax = b$. The current best algorithms for solving dense linear systems reduce the problem to matrix multiplication, and run in time $O(n^{\omega})$. We consider the problem of finding $\varepsilon$-approximate solutions to linear systems with respect to the $L_2$-norm, that is, given a satisfiable linear system $(A \in \mathbb{R}^{n \times n}, b \in \mathbb{R}^n)$, find an $x \in \mathbb{R}^n$ such that $\|Ax - b\|_2 \leq \varepsilon \|b\|_2$. Our main result is a fine-grained reduction from computing the rank of a matrix to finding $\varepsilon$-approximate solutions to linear systems. In particular, if the best known $O(n^{\omega})$ time algorithm for computing the rank of $n \times O(n)$ matrices is optimal (which we conjecture is true), then finding an $\varepsilon$-approximate solution to a dense linear system also requires $\tilde{\Omega}(n^{\omega})$ time, even for $\varepsilon$ as large as $(1 - 1/\text{poly}(n))$. We also prove (under some modified conjectures for the rank-finding problem) optimal hardness of approximation for sparse linear systems, linear systems over positive semidefinite matrices, well-conditioned linear systems, and approximately solving linear systems with respect to the $L_p$-norm, for $p \geq 1$. At the heart of our results is a novel reduction from the rank problem to a decision version of the approximate linear systems problem. This reduction preserves properties such as matrix sparsity and bit complexity.
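For intuition only (not part of the abstract), the approximation criterion $\|Ax - b\|_2 \leq \varepsilon \|b\|_2$ can be checked directly from a candidate solution; the following NumPy sketch is illustrative, and the function name and example data are hypothetical.

```python
# Illustrative only (not from the paper): what an epsilon-approximate solution
# to a linear system means in the L2 sense, ||Ax - b||_2 <= eps * ||b||_2.
import numpy as np

def is_eps_approximate(A, x, b, eps):
    """Check whether x satisfies ||Ax - b||_2 <= eps * ||b||_2."""
    return np.linalg.norm(A @ x - b) <= eps * np.linalg.norm(b)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 50))
    b = rng.standard_normal(50)
    x = np.linalg.solve(A, b)                 # exact solve, just for the demo
    print(is_eps_approximate(A, x, b, 1e-9))  # True up to floating-point error
```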
Combinatorial optimization problems arise in a wide range of applications from diverse domains. Many of these problems are NP-hard and designing efficient heuristics for them requires considerable time and experimentation. On the other hand, the number of optimization problems in the industry continues to grow. In recent years, machine learning techniques have been explored to address this gap. We propose a framework for leveraging machine learning techniques to scale up exact combinatorial optimization algorithms. In contrast to the existing approaches based on deep-learning, reinforcement learning and restricted Boltzmann machines that attempt to directly learn the output of the optimization problem from its input (with limited success), our framework learns the relatively simpler task of pruning the elements in order to reduce the size of the problem instances. In addition, our framework uses only interpretable learning models based on intuitive features and thus the learning process provides deeper insights into the optimization problem and the instance class, that can be used for designing better heuristics. For the classical maximum clique enumeration problem, we show that our framework can prune a large fraction of the input graph (around 99% of nodes in case of sparse graphs) and still detect almost all of the maximum cliques. This results in several fold speedups of state-of-the-art algorithms. Furthermore, the model used in our framework highlights that the chi-squared value of neighborhood degree has a statistically significant correlation with the presence of a node in a maximum clique, particularly in dense graphs which constitute a significant challenge for modern solvers. We leverage this insight to design a novel heuristic for this problem outperforming the state-of-the-art. Our heuristic is also of independent interest for maximum clique detection and enumeration.
This paper initiates the study of I/O algorithms (minimizing cache misses) from the perspective of fine-grained complexity (conditional polynomial lower bounds). Specifically, we aim to answer why sparse graph problems are so hard, and why the Longest Common Subsequence problem gets a savings of a factor of the size of cache times the length of a cache line, but no more. We take the reductions and techniques from complexity and fine-grained complexity and apply them to the I/O model to generate new (conditional) lower bounds as well as faster algorithms. We also prove the existence of a time hierarchy for the I/O model, which motivates the fine-grained reductions. Using fine-grained reductions, we give an algorithm for distinguishing 2 vs. 3 diameter and radius that runs in $O(|E|^2/(MB))$ cache misses, which for sparse graphs improves over the previous $O(|V|^2/B)$ running time. We give new reductions from radius and diameter to Wiener index and median. We show meaningful reductions between problems that have linear-time solutions in the RAM model. The reductions use low I/O complexity (typically $O(n/B)$), and thus help to finely capture the relationship between I/O linear time $\Theta(n/B)$ and RAM linear time $\Theta(n)$. We generate new I/O assumptions based on the difficulty of improving sparse graph problem running times in the I/O model. We create conjectures that the current best known algorithms for Single Source Shortest Paths (SSSP), diameter, and radius are optimal. From these I/O-model assumptions, we show that many of the known reductions in the word-RAM model can naturally extend to hold in the I/O model as well (e.g., a lower bound on the I/O complexity of Longest Common Subsequence that matches the best known running time). Finally, we prove an analog of the Time Hierarchy Theorem in the I/O model.
Maintaining and updating shortest paths information in a graph is a fundamental problem with many applications. As computations on dense graphs can be prohibitively expensive, it is preferable to perform the computations on a sparse skeleton of the given graph that roughly preserves the shortest paths information. Spanners and emulators serve this purpose. This paper develops fast dynamic algorithms for sparse spanner and emulator maintenance and provides evidence from fine-grained complexity that these algorithms are tight. Under the popular OMv conjecture, we show that there can be no decremental or incremental algorithm that maintains an $n^{1+o(1)}$ edge (purely additive) $+n^{\delta}$-emulator for any $\delta < 1/2$ with arbitrary polynomial preprocessing time and total update time $m^{1+o(1)}$. Also, under the Combinatorial $k$-Clique hypothesis, any fully dynamic combinatorial algorithm that maintains an $n^{1+o(1)}$ edge $(1+\epsilon, n^{o(1)})$-spanner or emulator must either have preprocessing time $mn^{1-o(1)}$ or amortized update time $m^{1-o(1)}$. Both of our conditional lower bounds are tight. As the above fully dynamic lower bound only applies to combinatorial algorithms, we also develop an algebraic spanner algorithm that improves over the $m^{1-o(1)}$ update time for dense graphs. For any constant $\epsilon \in (0,1]$, there is a fully dynamic algorithm with worst-case update time $O(n^{1.529})$ that whp maintains an $n^{1+o(1)}$ edge $(1+\epsilon, n^{o(1)})$-spanner. Our new algebraic techniques and spanner algorithms allow us to also obtain (1) a new fully dynamic algorithm for All-Pairs Shortest Paths (APSP) with update and path query time $O(n^{1.9})$; (2) a fully dynamic $(1+\epsilon)$-approximate APSP algorithm with update time $O(n^{1.529})$; (3) a fully dynamic algorithm for near-$2$-approximate Steiner tree maintenance.
Computing the similarity of two point sets is a ubiquitous task in medical imaging, geometric shape comparison, trajectory analysis, and many more settings. Arguably the most basic distance measure for this task is the Hausdorff distance, which assigns to each point from one set the closest point in the other set and then evaluates the maximum distance of any assigned pair. A drawback is that this distance measure is not translation invariant, that is, comparing two objects just according to their shape while disregarding their position in space is impossible. Fortunately, there is a canonical translation invariant version, the Hausdorff distance under translation, which minimizes the Hausdorff distance over all translations of one of the point sets. For point sets of size $n$ and $m$, the Hausdorff distance under translation can be computed in time $\tilde{O}(nm)$ for the $L_1$ and $L_\infty$ norm [Chew, Kedem SWAT92] and $\tilde{O}(nm(n+m))$ for the $L_2$ norm [Huttenlocher, Kedem, Sharir DCG93]. As these bounds have not been improved for over 25 years, in this paper we approach the Hausdorff distance under translation from the perspective of fine-grained complexity theory. We show (i) a matching lower bound of $(nm)^{1-o(1)}$ for $L_1$ and $L_\infty$ (and all other $L_p$ norms) assuming the Orthogonal Vectors Hypothesis and (ii) a matching lower bound of $n^{2-o(1)}$ for $L_2$ in the imbalanced case of $m = O(1)$ assuming the 3SUM Hypothesis.
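For intuition only (not part of the abstract), the following Python sketch computes the plain Hausdorff distance by brute force in $O(nm)$ time; the Hausdorff distance under translation would additionally minimize this quantity over all translations of one set. Function names and example data are illustrative.

```python
# Illustrative sketch (not from the paper): directed and symmetric Hausdorff
# distance between two finite point sets under the L2 norm, by brute force.
import math

def directed_hausdorff(P, Q):
    """Max over p in P of the distance from p to its nearest point in Q."""
    return max(min(math.dist(p, q) for q in Q) for p in P)

def hausdorff(P, Q):
    """Symmetric Hausdorff distance: max of the two directed distances."""
    return max(directed_hausdorff(P, Q), directed_hausdorff(Q, P))

if __name__ == "__main__":
    P = [(0.0, 0.0), (1.0, 0.0)]
    Q = [(0.0, 1.0), (1.0, 1.0)]
    print(hausdorff(P, Q))  # -> 1.0
```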