
Approximating Min-Mean-Cycle for low-diameter graphs in near-optimal time and memory

Published by: Jason Altschuler
Publication date: 2020
Research field: Informatics Engineering
Language: English





We revisit Min-Mean-Cycle, the classical problem of finding a cycle in a weighted directed graph with minimum mean weight. Despite an extensive algorithmic literature, previous work falls short of a near-linear runtime in the number of edges $m$--in fact, there is a natural barrier which precludes such a runtime for solving Min-Mean-Cycle exactly. Here, we give a much faster approximation algorithm that, for graphs with polylogarithmic diameter, has near-linear runtime. In particular, this is the first algorithm whose runtime for the complete graph scales in the number of vertices $n$ as $\tilde{O}(n^2)$. Moreover--unconditionally on the diameter--the algorithm uses only $O(n)$ memory beyond reading the input, making it memory-optimal. The algorithm is also simple to implement and has remarkable practical performance. Our approach is based on solving a linear programming (LP) relaxation using entropic regularization, which effectively reduces the LP to a Matrix Balancing problem--à la the popular reduction of Optimal Transport to Matrix Scaling. We then round the fractional LP solution using a variant of the classical Cycle-Cancelling algorithm that is sped up to near-linear runtime at the expense of being approximate, and implemented in a memory-optimal manner. We also provide an alternative algorithm with slightly faster theoretical runtime, albeit worse memory usage and practicality. This algorithm uses the same rounding procedure, but solves the LP relaxation by leveraging recent developments in area-convexity regularization. Its runtime scales inversely in the approximation accuracy, which we show is optimal--barring a major breakthrough in algorithmic graph theory, namely faster Shortest Paths algorithms.
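For context, the classical exact method for this problem is Karp's $O(nm)$ dynamic program, which illustrates the kind of runtime the barrier above refers to. The following is a minimal sketch of that classical baseline, not the paper's approximation algorithm; the function name and edge-list encoding are illustrative choices.

```python
import math

def min_mean_cycle(n, edges):
    """Karp's classical O(n*m) exact algorithm for the minimum mean cycle weight.

    n     -- number of vertices, labelled 0..n-1
    edges -- list of directed edges (u, v, w) with real weight w
    Returns the minimum mean weight over all directed cycles,
    or math.inf if the graph is acyclic.
    """
    # Add an artificial source (vertex n) with 0-weight edges to every vertex,
    # so that every cycle is reachable from it; this creates no new cycles.
    N = n + 1
    ext = edges + [(n, v, 0.0) for v in range(n)]

    # d[k][v] = minimum weight of a walk with exactly k edges
    # from the artificial source to v (math.inf if no such walk exists).
    d = [[math.inf] * N for _ in range(N + 1)]
    d[0][n] = 0.0
    for k in range(1, N + 1):
        for u, v, w in ext:
            if d[k - 1][u] + w < d[k][v]:
                d[k][v] = d[k - 1][u] + w

    # Karp's formula: lambda* = min_v max_k (d[N][v] - d[k][v]) / (N - k).
    best = math.inf
    for v in range(N):
        if d[N][v] == math.inf:
            continue
        worst = max(
            (d[N][v] - d[k][v]) / (N - k) for k in range(N) if d[k][v] < math.inf
        )
        best = min(best, worst)
    return best

# Example: cycles 0->1->2->0 (mean 2.0) and 2->3->2 (mean 1.5).
print(min_mean_cycle(4, [(0, 1, 1), (1, 2, 2), (2, 0, 3), (2, 3, 1), (3, 2, 2)]))  # 1.5
```

The paper's contribution is avoiding the $\Theta(nm)$ cost of such exact dynamic programs by combining an entropically regularized LP relaxation, solved via Matrix Balancing, with an approximate, memory-optimal cycle-cancelling rounding step.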



Read also

Hung Le, Shay Solomon (2021)
Let $G = (V,E,w)$ be a weighted undirected graph on $|V| = n$ vertices and $|E| = m$ edges, let $k \ge 1$ be any integer, and let $\epsilon < 1$ be any parameter. We present the following results on fast constructions of spanners with near-optimal sparsity and lightness, which culminate a long line of work in this area. (By near-optimal we mean optimal under the Erd\H{o}s girth conjecture and disregarding the $\epsilon$-dependencies.) - There are (deterministic) algorithms for constructing $(2k-1)(1+\epsilon)$-spanners for $G$ with a near-optimal sparsity of $O(n^{1/k} \log(1/\epsilon)/\epsilon)$. The first algorithm can be implemented in the pointer-machine model within time $O(m\alpha(m,n) \log(1/\epsilon)/\epsilon + \mathrm{SORT}(m))$, where $\alpha(\cdot,\cdot)$ is the two-parameter inverse-Ackermann function and $\mathrm{SORT}(m)$ is the time needed to sort $m$ integers. The second algorithm can be implemented in the WORD RAM model within time $O(m \log(1/\epsilon)/\epsilon)$. - There is a (deterministic) algorithm for constructing a $(2k-1)(1+\epsilon)$-spanner for $G$ that achieves a near-optimal bound of $O(n^{1/k}\mathrm{poly}(1/\epsilon))$ on both sparsity and lightness. This algorithm can be implemented in the pointer-machine model within time $O(m\alpha(m,n)\,\mathrm{poly}(1/\epsilon) + \mathrm{SORT}(m))$ and in the WORD RAM model within time $O(m \alpha(m,n)\,\mathrm{poly}(1/\epsilon))$. The previous fastest constructions of $(2k-1)(1+\epsilon)$-spanners with near-optimal sparsity incur a runtime of $O(\min\{m(n^{1+1/k}) + n\log n, k n^{2+1/k}\})$, even regardless of the lightness. Importantly, the greedy spanner for stretch $2k-1$ has sparsity $O(n^{1/k})$ -- with no $\epsilon$-dependence whatsoever, but its runtime is $O(m(n^{1+1/k} + n\log n))$. Moreover, the state-of-the-art lightness bound of any $(2k-1)$-spanner is poor, even regardless of the sparsity and runtime.
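The greedy spanner mentioned at the end of the abstract is simple to state, and seeing it explains where the $O(m(n^{1+1/k} + n\log n))$ runtime comes from: every candidate edge triggers a shortest-path query inside the partially built spanner. Below is a minimal sketch of that classical greedy construction (Althöfer et al.), not this paper's near-linear-time algorithms; the function name and edge-list format are illustrative.

```python
import heapq
import math

def greedy_spanner(n, edges, k):
    """Classical greedy (2k-1)-spanner: scan edges by non-decreasing weight and
    keep an edge only if the spanner built so far does not already connect its
    endpoints by a path of weight at most (2k-1) times the edge's weight.

    n     -- number of vertices, labelled 0..n-1
    edges -- list of undirected edges (u, v, w)
    k     -- stretch parameter; the output is a (2k-1)-spanner of the input graph
    """
    adj = [[] for _ in range(n)]   # adjacency list of the spanner built so far
    spanner = []
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        limit = (2 * k - 1) * w
        # Dijkstra from u inside the current spanner, truncated at `limit`.
        dist = {u: 0.0}
        heap = [(0.0, u)]
        while heap:
            d, x = heapq.heappop(heap)
            if d > dist.get(x, math.inf):
                continue
            for y, wy in adj[x]:
                nd = d + wy
                if nd <= limit and nd < dist.get(y, math.inf):
                    dist[y] = nd
                    heapq.heappush(heap, (nd, y))
        if dist.get(v, math.inf) > limit:
            spanner.append((u, v, w))
            adj[u].append((v, w))
            adj[v].append((u, w))
    return spanner

# Example: on a 4-cycle plus a heavy chord, k=2 keeps only a spanning path (stretch <= 3).
print(greedy_spanner(4, [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 0, 1), (0, 2, 3)], k=2))
```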
Minimum-weight cut (min-cut) is a basic measure of a network's connectivity strength. While the min-cut can be computed efficiently in the sequential setting [Karger, STOC'96], there was no efficient way for a distributed network to compute its own min-cut without limiting the input structure or dropping the output quality: In the standard CONGEST model, existing algorithms with nearly-optimal time (e.g. [Ghaffari, Kuhn, DISC'13; Nanongkai, Su, DISC'14]) can guarantee a solution that is a $(1+\epsilon)$-approximation at best, while the exact $\tilde O(n^{0.8}D^{0.2} + n^{0.9})$-time algorithm [Ghaffari, Nowicki, Thorup, SODA'20] works only on *simple* networks (no weights and no parallel edges). Here $n$ and $D$ denote the network's number of vertices and hop-diameter, respectively. For the weighted case, the best bound was $\tilde O(n)$ [Daga, Henzinger, Nanongkai, Saranurak, STOC'19]. In this paper, we provide an *exact* $\tilde O(\sqrt n + D)$-time algorithm for computing min-cut on *weighted* networks. Our result improves even the previous algorithm that works only on simple networks. Its time complexity matches the known lower bound up to polylogarithmic factors. At the heart of our algorithm are a clever routing trick and two structural lemmas regarding the structure of a minimum cut of a graph. These two structural lemmas considerably strengthen and generalize the framework of Mukhopadhyay-Nanongkai [STOC'20] and can be of independent interest.
Let $G$ be a graph and $S, T \subseteq V(G)$ be (possibly overlapping) sets of terminals, $|S|=|T|=k$. We are interested in computing a vertex sparsifier for terminal cuts in $G$, i.e., a graph $H$ on a smallest possible number of vertices, where $S \cup T \subseteq V(H)$ and such that for every $A \subseteq S$ and $B \subseteq T$ the size of a minimum $(A,B)$-vertex cut is the same in $G$ as in $H$. We assume that our graphs are unweighted and that terminals may be part of the min-cut. In previous work, Kratsch and Wahlström (FOCS 2012/JACM 2020) used connections to matroid theory to show that a vertex sparsifier $H$ with $O(k^3)$ vertices can be computed in randomized polynomial time, even for arbitrary digraphs $G$. However, since then, no improvements on the size $O(k^3)$ have been shown. In this paper, we draw inspiration from the renowned Bollobás Two-Families Theorem in extremal combinatorics and introduce the use of total orderings into Kratsch and Wahlström's methods. This new perspective allows us to construct a sparsifier $H$ of $\Theta(k^2)$ vertices for the case that $G$ is a DAG. We also show how to compute $H$ in time near-linear in the size of $G$, improving on the previous $O(n^{\omega+1})$. Furthermore, $H$ recovers the closest min-cut in $G$ for every partition $(A,B)$, which was not previously known. Finally, we show that a sparsifier of size $\Omega(k^2)$ is required, both for DAGs and for undirected edge cuts.
We give an $n^{2+o(1)}$-time algorithm for finding $s$-$t$ min-cuts for all pairs of vertices $s$ and $t$ in a simple, undirected graph on $n$ vertices. We do so by constructing a Gomory-Hu tree (or cut equivalent tree) in the same running time, thereby improving on the recent bound of $\tilde{O}(n^{2.5})$ by Abboud et al. (FOCS 2021). Our running time is nearly optimal as a function of $n$.
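The cut-equivalent (Gomory-Hu) tree property this abstract relies on is that every pairwise min-cut value can be read off a single tree: the $s$-$t$ min-cut in $G$ equals the lightest edge on the unique $s$-$t$ path in the tree. A minimal illustration with networkx follows; networkx builds the tree with the classical $n-1$ max-flow construction rather than the paper's $n^{2+o(1)}$ algorithm, and the example graph and capacities are made up.

```python
import networkx as nx

# Small weighted example graph (capacities are arbitrary illustrative values).
G = nx.Graph()
G.add_weighted_edges_from(
    [(0, 1, 3), (1, 2, 2), (2, 3, 4), (3, 0, 1), (1, 3, 5)], weight="capacity"
)

# Cut-equivalent tree via the classical construction (n - 1 max-flow calls).
T = nx.gomory_hu_tree(G, capacity="capacity")

def min_cut_value(T, s, t):
    """s-t min-cut value in G = minimum edge weight on the unique s-t path in T."""
    path = nx.shortest_path(T, s, t)
    return min(T[u][v]["weight"] for u, v in zip(path, path[1:]))

print(min_cut_value(T, 0, 2))  # 4: the cheapest cut separates vertex 0 from the rest
```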
The diameter, radius and eccentricities are natural graph parameters. While these problems have been studied extensively, there are no known dynamic algorithms for them beyond the ones that follow from trivial recomputation after each update or from solving dynamic All-Pairs Shortest Paths (APSP), which is very computationally intensive. This is the situation for dynamic approximation algorithms as well, and even if only edge insertions or edge deletions need to be supported. This paper provides a comprehensive study of the dynamic approximation of Diameter, Radius and Eccentricities, providing both conditional lower bounds, and new algorithms whose bounds are optimal under popular hypotheses in fine-grained complexity. Some of the highlights include: - Under popular hardness hypotheses, there can be no significantly better fully dynamic approximation algorithms than recomputing the answer after each update, or maintaining full APSP. - Nearly optimal partially dynamic (incremental/decremental) algorithms can be achieved via efficient reductions to (incremental/decremental) maintenance of Single-Source Shortest Paths. For instance, a nearly $(3/2+\epsilon)$-approximation to Diameter in directed or undirected graphs can be maintained decrementally in total time $m^{1+o(1)}\sqrt{n}/\epsilon^2$. This nearly matches the static $3/2$-approximation algorithm for the problem that is known to be conditionally optimal.