
Coarse-Grained Complexity for Dynamic Algorithms

Posted by Thatchaphol Saranurak
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





To date, the only way to argue polynomial lower bounds for dynamic algorithms is via fine-grained complexity arguments. These arguments rely on strong assumptions about specific problems, such as the Strong Exponential Time Hypothesis (SETH) and the Online Matrix-Vector Multiplication Conjecture (OMv). While they have led to many exciting discoveries, dynamic algorithms still miss out on some of the benefits and lessons of the traditional "coarse-grained" approach, which relates classes of problems such as P and NP to one another. In this paper we initiate the study of a coarse-grained complexity theory for dynamic algorithms. Among the questions this theory can answer are the following.

What if dynamic Orthogonal Vectors (OV) is easy in the cell-probe model? A research program for proving polynomial unconditional lower bounds for dynamic OV in the cell-probe model is motivated by the fact that many conditional lower bounds can be shown via reductions from the dynamic OV problem. Since the cell-probe model is more powerful than word RAM and has historically allowed smaller upper bounds, it might turn out that dynamic OV is easy in the cell-probe model, making this research direction infeasible. Our theory implies that if this is the case, there will be very interesting algorithmic consequences: if dynamic OV can be maintained in polylogarithmic worst-case update time in the cell-probe model, then so can several important dynamic problems, such as $k$-edge connectivity, $(1+\epsilon)$-approximate mincut, $(1+\epsilon)$-approximate matching, planar nearest neighbors, Chan's subset union, and 3-vs-4 diameter. The same conclusion can be made when we replace dynamic OV by, e.g., subgraph connectivity, single-source reachability, Chan's subset union, or 3-vs-4 diameter.

Lower bounds for $k$-edge connectivity via dynamic OV? (See the full abstract in the pdf file.)
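To make the object of study concrete, the following is a minimal Python sketch of a naive dynamic OV data structure: updates insert or delete Boolean vectors on either side, and a query asks whether some cross pair is orthogonal. The class name and interface are illustrative assumptions, not from the paper; the brute-force query is exactly the trivial bound against which the conditional lower bounds are measured.

from itertools import product

class NaiveDynamicOV:
    """Naive baseline for dynamic Orthogonal Vectors (OV).

    Two sets U and V of d-dimensional Boolean vectors are maintained
    under insertions and deletions; a query asks whether some pair
    (u, v) in U x V is orthogonal, i.e. shares no 1-coordinate.
    Updates cost O(1); a query costs O(|U| * |V| * d).
    """

    def __init__(self):
        self.U, self.V = set(), set()

    def insert(self, side, vec):
        (self.U if side == "U" else self.V).add(tuple(vec))

    def delete(self, side, vec):
        (self.U if side == "U" else self.V).discard(tuple(vec))

    def query(self):
        # Brute force over all cross pairs.
        return any(all(a & b == 0 for a, b in zip(u, v))
                   for u, v in product(self.U, self.V))

For example, after inserting (1, 0, 1) into U and (0, 1, 0) into V, a query returns True, since the two vectors share no 1-coordinate.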




Read also

In this paper we study the fine-grained complexity of finding exact and approximate solutions to problems in P. Our main contribution is showing reductions from exact to approximate solutions for a host of such problems. As one (notable) example, we show that the Closest-LCS-Pair problem (given two sets of strings $A$ and $B$, compute exactly the maximum $\textsf{LCS}(a, b)$ with $(a, b) \in A \times B$) is equivalent to its approximation version (under near-linear time reductions, and with a constant approximation factor). More generally, we identify a class of problems, which we call BP-Pair-Class, comprising both exact and approximate solutions, and show that they are all equivalent under near-linear time reductions. Exploring this class and its properties, we also show:
$\bullet$ Under the NC-SETH assumption (a significantly more relaxed assumption than SETH), solving any of the problems in this class requires essentially quadratic time.
$\bullet$ Modest improvements on the running time of known algorithms (shaving log factors) would imply that NEXP is not in non-uniform $\textsf{NC}^1$.
$\bullet$ Finally, we leverage our techniques to show new barriers for deterministic approximation algorithms for LCS.
At the heart of these new results is a deep connection between interactive proof systems for bounded-space computations and the fine-grained complexity of exact and approximate solutions to problems in P. In particular, our results build on the proof techniques from the classical IP = PSPACE result.
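For concreteness, here is a minimal Python sketch of the exact Closest-LCS-Pair objective, built on the textbook quadratic-time LCS dynamic program. The function names are ours; the brute-force pair loop is the trivial exact algorithm that the equivalence results concern.

def lcs(a, b):
    """Textbook O(|a|*|b|) dynamic program for the LCS length."""
    prev = [0] * (len(b) + 1)
    for ca in a:
        curr = [0]
        for j, cb in enumerate(b, 1):
            curr.append(prev[j - 1] + 1 if ca == cb
                        else max(prev[j], curr[j - 1]))
        prev = curr
    return prev[-1]

def closest_lcs_pair(A, B):
    """Exact Closest-LCS-Pair: max LCS(a, b) over (a, b) in A x B.

    The point of the equivalence results is that even approximating
    this maximum within a constant factor is essentially as hard as
    computing it exactly.
    """
    return max(lcs(a, b) for a in A for b in B)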
While operating communication networks adaptively may improve utilization and performance, frequent adjustments also introduce an algorithmic challenge: the re-optimization of traffic engineering solutions is time-consuming and may limit the granularity at which a network can be adjusted. This paper is motivated by the question of whether the reactivity of a network can be improved by re-optimizing solutions dynamically rather than from scratch, especially if inputs such as link weights do not change significantly. This paper explores to what extent dynamic algorithms can be used to speed up fundamental tasks in network operations. We specifically investigate optimizations related to traffic engineering (namely shortest paths and maximum flow computations), but also consider spanning tree and matching applications. While prior work on dynamic graph algorithms focuses on link insertions and deletions, we are interested in the practical problem of link weight changes. We revisit existing upper bounds in the weight-dynamic model, and present several novel lower bounds on the amortized runtime for recomputing solutions. In general, we find that the potential performance gains depend on the application, and there are also strict limitations on what can be achieved, even if link weights change only slightly.
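As a reference point for the weight-dynamic model, the following Python sketch shows the from-scratch baseline: a single link-weight change triggers a full Dijkstra recomputation. All names are illustrative assumptions; the paper's question is precisely when a dynamic algorithm can repair the previous solution faster than this.

import heapq

def dijkstra(adj, src):
    """Textbook Dijkstra over an adjacency dict {u: {v: weight}}."""
    dist, heap = {src: 0}, [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj.get(u, {}).items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def change_weight(adj, src, u, v, new_w):
    """From-scratch baseline: apply a link-weight change, recompute.

    A weight-dynamic algorithm would instead repair the previous
    distance labels locally when new_w is close to the old weight.
    """
    adj[u][v] = new_w
    return dijkstra(adj, src)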
Ketan D. Mulmuley (2012)
We study a basic algorithmic problem in algebraic geometry, which we call NNL, of constructing a normalizing map as per Noether's Normalization Lemma. For general explicit varieties, as formally defined in this paper, we give a randomized polynomial-time Monte Carlo algorithm for this problem. For some interesting cases of explicit varieties, we give deterministic quasi-polynomial time algorithms. These may be contrasted with the standard EXPSPACE algorithms for these problems in computational algebraic geometry. In particular, we show that: (1) the categorical quotient for any finite-dimensional representation $V$ of $SL_m$, with constant $m$, is explicit in characteristic zero; (2) NNL for this categorical quotient can be solved deterministically in time quasi-polynomial in the dimension of $V$; (3) the categorical quotient of the space of $r$-tuples of $m \times m$ matrices by the simultaneous conjugation action of $SL_m$ is explicit in any characteristic; (4) NNL for this categorical quotient can be solved deterministically in time quasi-polynomial in $m$ and $r$ in any characteristic $p$ not in $[2, m/2]$; (5) NNL for every explicit variety in zero or large enough characteristic can be solved deterministically in quasi-polynomial time, assuming the hardness hypothesis for the permanent in geometric complexity theory. The last result leads to a geometric complexity theory approach to put NNL for every explicit variety in P.
We develop a new framework for generalizing approximation algorithms from the structural graph algorithm literature so that they apply to graphs somewhat close to that class (a scenario we expect is common when working with real-world networks) while still guaranteeing approximation ratios. The idea is to $\textit{edit}$ a given graph via vertex- or edge-deletions to put the graph into an algorithmically tractable class, apply known approximation algorithms for that class, and then $\textit{lift}$ the solution to apply to the original graph. We give a general characterization of when an optimization problem is amenable to this approach, and show that it includes many well-studied graph problems, such as Independent Set, Vertex Cover, Feedback Vertex Set, Minimum Maximal Matching, Chromatic Number, ($\ell$-)Dominating Set, Edge ($\ell$-)Dominating Set, and Connected Dominating Set. To enable this framework, we develop new editing algorithms that find the approximately-fewest edits required to bring a given graph into one of several important graph classes (in some cases, also approximating the target parameter of the family). For bounded degeneracy, we obtain a bicriteria $(4,4)$-approximation which also extends to a smoother bicriteria trade-off. For bounded treewidth, we obtain a bicriteria $(O(\log^{1.5} n), O(\sqrt{\log w}))$-approximation, and for bounded pathwidth, we obtain a bicriteria $(O(\log^{1.5} n), O(\sqrt{\log w} \cdot \log n))$-approximation. For treedepth $2$ (also related to bounded expansion), we obtain a $4$-approximation. We also prove complementary hardness-of-approximation results assuming $\mathrm{P} \neq \mathrm{NP}$: in particular, these problems are all log-factor inapproximable, except the last, which is not approximable below some constant factor ($2$ assuming UGC).
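The edit-apply-lift pattern is easiest to see on Vertex Cover, where lifting is trivial: every edge incident to a deleted vertex can be covered by putting that vertex back into the solution. The Python sketch below is a toy instantiation under our own assumptions (using the generic maximal-matching 2-approximation as the "tractable" solver), not the paper's editing algorithms.

def matching_2approx_vc(edges):
    """Generic 2-approximation for Vertex Cover via a maximal matching."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

def edit_apply_lift_vc(edges, deleted):
    """Toy edit-apply-lift for Vertex Cover.

    `deleted` is a vertex set whose removal is assumed to put the
    graph into a tractable class; we solve on the edited graph and
    lift by adding the deleted vertices back, which covers every
    edge the edit removed.
    """
    inner = [(u, v) for u, v in edges
             if u not in deleted and v not in deleted]
    return matching_2approx_vc(inner) | set(deleted)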
Lei Shang (2017)
This PhD thesis summarizes research on the design of exact algorithms that provide a worst-case (time or space) guarantee for NP-hard scheduling problems. Both theoretical and practical aspects are considered, with three main results reported. The first is a Dynamic Programming algorithm which solves the $F3||C_{\max}$ problem in $O^*(3^n)$ time and space. The algorithm is easily generalized to other flowshop problems and single-machine scheduling problems. The second contribution is a search tree method called Branch & Merge which solves the $1||\sum T_i$ problem with time complexity converging to $O^*(2^n)$ and in polynomial space. Our third contribution aims to improve the practical efficiency of exact search tree algorithms for scheduling problems. First, we realized that a better way to implement the idea of Branch & Merge is to use a technique called Memorization. Thanks to a newly found algorithmic paradox and the implementation of a memory cleaning strategy, the method succeeded in solving instances with 300 more jobs than the state-of-the-art algorithm for the $1||\sum T_i$ problem. The treatment is then extended to three further problems: $1|r_i|\sum C_i$, $1|\tilde{d}_i|\sum w_i C_i$ and $F2||\sum C_i$. Together, the results on the four problems show the power of the Memorization paradigm when applied to sequencing problems. We name it Branch & Memorize to promote a systematic consideration of Memorization as an essential building block in branching algorithms such as Branch and Bound. The method can surely also be used to solve other problems, which are not necessarily scheduling problems.
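As a self-contained illustration of Memorization on a sequencing problem, here is the classic subset dynamic program for $1||\sum T_i$ in Python, which runs in $O^*(2^n)$ time and space. This is our toy sketch of the underlying idea, not the thesis's implementation.

from functools import lru_cache

def min_total_tardiness(p, d):
    """Classic O*(2^n) subset dynamic program for 1 || sum T_i.

    p[j] and d[j] are the processing time and due date of job j.
    The state is the set of already-sequenced jobs (a bitmask):
    those jobs fill the first sum-of-p[j] time units, so the
    completion time of whichever job is sequenced last depends on
    the mask alone.  Caching the states is Memorization in its
    purest form.
    """
    n = len(p)

    @lru_cache(maxsize=None)
    def best(mask):
        if mask == 0:
            return 0
        finish = sum(p[j] for j in range(n) if mask >> j & 1)
        # Try every job in the mask as the one completing at `finish`.
        return min(best(mask ^ (1 << j)) + max(0, finish - d[j])
                   for j in range(n) if mask >> j & 1)

    return best((1 << n) - 1)

For example, min_total_tardiness([3, 2, 4], [2, 5, 6]) explores all $2^3$ job subsets once each, instead of all $3!$ sequences.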
