
Distributed algorithms for covering, packing and maximum weighted matching

Posted by Neal E. Young
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





This paper gives poly-logarithmic-round, distributed D-approximation algorithms for covering problems with submodular cost and monotone covering constraints (Submodular-cost Covering). The approximation ratio D is the maximum number of variables in any constraint. Special cases include Covering Mixed Integer Linear Programs (CMIP), and Weighted Vertex Cover (with D=2). Via duality, the paper also gives poly-logarithmic-round, distributed D-approximation algorithms for Fractional Packing linear programs (where D is the maximum number of constraints in which any variable occurs), and for Max Weighted c-Matching in hypergraphs (where D is the maximum size of any of the hyperedges; for graphs D=2). The paper also gives parallel (RNC) 2-approximation algorithms for CMIP with two variables per constraint and Weighted Vertex Cover. The algorithms are randomized. All of the approximation ratios exactly match those of comparable centralized algorithms.
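For the Weighted Vertex Cover special case ($D=2$), the ratio matched here is the classic local-ratio/primal-dual 2-approximation. Below is a minimal centralized sketch of that baseline in Python; it only illustrates where the factor $D$ comes from (each covering step pays on at most $D=2$ variables), not the paper's poly-logarithmic-round distributed machinery, and all names are ours.

```python
# Minimal centralized sketch: local-ratio 2-approximation for
# Weighted Vertex Cover (the D = 2 special case of Submodular-cost
# Covering). Each step pays the same amount on both endpoints of an
# uncovered edge, so the total cost is at most 2 * OPT.

def vertex_cover_2_approx(edges, weight):
    """edges: iterable of (u, v) pairs; weight: dict vertex -> cost.
    Returns a vertex cover of total cost at most 2 * OPT."""
    residual = dict(weight)          # remaining cost of each vertex
    cover = set()
    for u, v in edges:
        if u in cover or v in cover:
            continue                 # edge already covered
        step = min(residual[u], residual[v])
        residual[u] -= step          # charge the smaller residual
        residual[v] -= step          # cost to both endpoints
        # any vertex driven to zero residual cost enters the cover
        if residual[u] == 0:
            cover.add(u)
        if residual[v] == 0:
            cover.add(v)
    return cover

# Example: a path a-b-c with weights 3, 1, 2; {b} is optimal.
print(vertex_cover_2_approx([("a", "b"), ("b", "c")],
                            {"a": 3, "b": 1, "c": 2}))
```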




Read also

Neal E. Young, 2014
We describe the first nearly linear-time approximation algorithms for explicitly given mixed packing/covering linear programs, and for (non-metric) fractional facility location. We also describe the first parallel algorithms requiring only near-linear total work and finishing in polylog time. The algorithms compute $(1+\epsilon)$-approximate solutions in time (and work) $O^*(N/\epsilon^2)$, where $N$ is the number of non-zeros in the constraint matrix. For facility location, $N$ is the number of eligible client/facility pairs.
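As a hedged illustration of the multiplicative-weights pattern behind such $(1+\epsilon)$-approximate packing/covering solvers, the sketch below approximates the toy min-max problem $\min_{x \in \Delta_n} \max_i (Ax)_i$ for a matrix with entries in $[0,1]$. It is a textbook width-1 MWU loop, not this paper's nearly linear-time algorithm, and all names are illustrative.

```python
import math

# Toy MWU/Hedge loop: approximate min over the simplex x of
# max_i (A x)_i, for A with entries in [0, 1], to additive O(eps).
# Weights live on the m rows (constraints); each round the x-player
# best-responds with the column of least weighted load, and rows
# loaded by that column are boosted multiplicatively.

def minimax_mwu(A, eps=0.1):
    m, n = len(A), len(A[0])
    T = max(1, round(4.0 * math.log(m) / eps ** 2))
    w = [1.0] * m                    # one weight per row (constraint)
    plays = [0] * n                  # how often each column is chosen
    for _ in range(T):
        # x-player best response: column of least weighted load
        j = min(range(n), key=lambda col: sum(w[i] * A[i][col] for i in range(m)))
        plays[j] += 1
        # w-player runs Hedge: boost rows the chosen column loads
        w = [w[i] * math.exp(eps * A[i][j]) for i in range(m)]
    x = [p / T for p in plays]       # average of the T pure responses
    load = max(sum(A[i][j] * x[j] for j in range(n)) for i in range(m))
    return x, load

# Example: two columns, each loading one of two rows; the optimum
# splits mass evenly, giving max load 0.5.
x, load = minimax_mwu([[1.0, 0.0], [0.0, 1.0]], eps=0.1)
print(x, load)                       # load close to 0.5
```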
For over a decade now we have been witnessing the success of \emph{massive parallel computation} (MPC) frameworks, such as MapReduce, Hadoop, Dryad, or Spark. One of the reasons for their success is that these frameworks accurately capture the nature of large-scale computation. In particular, compared to classic distributed algorithms or PRAM models, these frameworks allow for much more local computation. The fundamental question that arises in this context is: can we leverage this additional power to obtain even faster parallel algorithms? A prominent example here is the \emph{maximum matching} problem, one of the most classic graph problems. It is well known that in the PRAM model one can compute a 2-approximate maximum matching in $O(\log n)$ rounds. However, the exact complexity of this problem in the MPC framework is still far from understood. Lattanzi et al. showed that if each machine has $n^{1+\Omega(1)}$ memory, this problem can be solved 2-approximately in a constant number of rounds. These techniques, as well as the approaches developed in follow-up work, seem to get stuck in a fundamental way at roughly $O(\log n)$ rounds once we enter the near-linear memory regime. It is thus entirely possible that in this regime, which captures in particular the case of sparse graph computations, the best MPC round complexity matches what one can already get in the PRAM model, without taking advantage of the extra local computation power. In this paper, we refute that perplexing possibility: we break the above $O(\log n)$ round complexity bound even in the case of \emph{slightly sublinear} memory per machine. In fact, our improvement is almost exponential: we deliver a $(2+\epsilon)$-approximation to maximum matching, for any fixed constant $\epsilon > 0$, in $O((\log\log n)^2)$ rounds.
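For context, the 2-approximation guarantee discussed above is achieved by any maximal matching: by maximality every optimal edge shares an endpoint with some matched edge, and each matched edge has only two endpoints, so $|OPT| \le 2|M|$. A minimal sequential sketch follows (illustrative names); the MPC contribution is achieving this guarantee in $O((\log\log n)^2)$ rounds, which a sequential scan of course does not capture.

```python
# Greedy maximal matching: scan edges once, take any edge whose
# endpoints are both still free. The result is maximal, hence a
# 2-approximation to maximum (cardinality) matching.

def greedy_maximal_matching(edges):
    matched, matching = set(), []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))  # greedily take any free edge
            matched.update((u, v))
    return matching

# Example: on a path a-b-c-d the greedy picks (a, b) and (c, d).
print(greedy_maximal_matching([("a", "b"), ("b", "c"), ("c", "d")]))
```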
Soheil Behnezhad, 2021
We present a near-tight analysis of the average query complexity, à la Nguyen and Onak [FOCS08], of the randomized greedy maximal matching algorithm, improving over the bound of Yoshida, Yamamoto and Ito [STOC09]. For any $n$-vertex graph of average degree $\bar{d}$, this leads to the following sublinear-time algorithms for estimating the size of maximum matching and minimum vertex cover, all of which are provably time-optimal up to logarithmic factors:
- A multiplicative $(2+\epsilon)$-approximation in $\widetilde{O}(n/\epsilon^2)$ time using adjacency list queries. This (nearly) matches an $\Omega(n)$ time lower bound for any multiplicative approximation and is, notably, the first $O(1)$-approximation that runs in $o(n^{1.5})$ time.
- A $(2, \epsilon n)$-approximation in $\widetilde{O}((\bar{d} + 1)/\epsilon^2)$ time using adjacency list queries. This (nearly) matches an $\Omega(\bar{d}+1)$ lower bound of Parnas and Ron [TCS07], which holds for any $(O(1), \epsilon n)$-approximation, and improves over the bounds of [Yoshida et al. STOC09; Onak et al. SODA12] and [Kapralov et al. SODA20]: the former two take time at least quadratic in the degree, which can be as large as $\Omega(n^2)$, and the latter obtains a much larger approximation.
- A $(2, \epsilon n)$-approximation in $\widetilde{O}(n/\epsilon^3)$ time using adjacency matrix queries. This (nearly) matches an $\Omega(n)$ time lower bound in this model and improves over the $\widetilde{O}(n\sqrt{n})$-time $(2, \epsilon n)$-approximate algorithm of [Chen, Kannan, and Khanna ICALP20]. It also turns out that any non-trivial multiplicative approximation in the adjacency matrix model requires $\Omega(n^2)$ time, so the additive $\epsilon n$ error is necessary too.
As immediate corollaries, we get improved sublinear-time estimators for (variants of) TSP and an improved AMPC algorithm for maximal matching.
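A sketch of the kind of local oracle analyzed here, assuming the standard Nguyen-Onak setup: fix an independent uniform priority per edge; an edge belongs to the randomized greedy maximal matching iff no adjacent edge of strictly lower priority does. The oracle below answers membership queries by recursing only on lower-priority neighboring edges, so the matching is never built globally. The adjacency-dict interface and all names are illustrative.

```python
import random

# Local oracle for randomized greedy maximal matching. Priorities
# are drawn lazily and memoized, so repeated queries stay consistent
# and each recursion strictly decreases priority (priorities are
# almost surely distinct), guaranteeing termination.

def make_matching_oracle(adj, seed=0):
    """adj: dict vertex -> list of neighbors (simple graph).
    Returns oracle(u, v) -> bool: is edge {u, v} in the matching?"""
    rng = random.Random(seed)
    rank, memo = {}, {}

    def priority(u, v):
        e = frozenset((u, v))
        if e not in rank:            # draw edge priorities lazily
            rank[e] = rng.random()
        return rank[e]

    def oracle(u, v):
        e = frozenset((u, v))
        if e not in memo:
            # adjacent edges, checked in increasing priority order
            nbrs = [(a, b) for a in (u, v) for b in adj[a] if b not in e]
            nbrs.sort(key=lambda f: priority(*f))
            # matched iff every lower-priority neighbor is unmatched
            memo[e] = all(priority(a, b) > priority(u, v) or not oracle(a, b)
                          for a, b in nbrs)
        return memo[e]

    return oracle

# Example: query edges of a path a-b-c-d; a sublinear estimator
# would instead sample random edges/vertices and count hits.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
oracle = make_matching_oracle(adj)
print([(u, v) for u, v in [("a", "b"), ("b", "c"), ("c", "d")]
       if oracle(u, v)])
```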
Piotr Sankowski, 2017
Consider a planar graph $G=(V,E)$ with a polynomially bounded edge weight function $w: E \to [0, \mathrm{poly}(n)]$. The main results of this paper are NC algorithms for the following problems:
- minimum weight perfect matching in $G$,
- maximum cardinality and maximum weight matching in $G$ when $G$ is bipartite,
- maximum multiple-source multiple-sink flow in $G$, where $c: E \to [1, \mathrm{poly}(n)]$ is a polynomially bounded edge capacity function,
- minimum weight $f$-factor in $G$, where $f: V \to [1, \mathrm{poly}(n)]$,
- min-cost flow in $G$, where $c: E \to [1, \mathrm{poly}(n)]$ is a polynomially bounded edge capacity function and $b: V \to [1, \mathrm{poly}(n)]$ is a polynomially bounded vertex demand function.
No NC algorithms were previously known for any of these problems (prior to this paper and the independent work of Anari and Vazirani). To solve these problems we develop a new, relatively simple but versatile framework that is combinatorial in spirit: it handles the combinatorial structure of matchings directly and only needs to know the weights of appropriately defined matchings from algebraic subroutines.
In this paper we give fast distributed graph algorithms for detecting and listing small subgraphs, and for computing or approximating the girth. Our algorithms improve upon the state of the art by polynomial factors, and for girth, we obtain a constant-time algorithm for additive +1 approximation in the Congested Clique, and the first parametrized algorithm for exact computation in CONGEST. In the Congested Clique, we develop a technique for learning small neighborhoods, and apply it to obtain an $O(1)$-round algorithm that computes the girth with only an additive +1 error. Next, we introduce a new technique (the partition tree technique) allowing for efficiently and deterministically listing all copies of any subgraph, improving upon the state of the art for non-dense graphs. We give two applications of this technique: first, we show that for constant $k$, $C_{2k}$-detection can be solved in $O(1)$ rounds in the Congested Clique, improving on prior work which used matrix multiplication and had polynomial round complexity. Second, we show that in triangle-free graphs, the girth can be exactly computed in time polynomially faster than the best known bounds for general graphs. In CONGEST, we describe a new approach for finding cycles, and apply it in two ways: first, we show a fast parametrized algorithm for girth with round complexity $\tilde{O}(\min(g \cdot n^{1-1/\Theta(g)}, n))$ for any girth $g$; second, we show how to find small even-length cycles $C_{2k}$ for $k = 3, 4, 5$ in $O(n^{1-1/k})$ rounds, which is a polynomial improvement upon the previous running times. Finally, using our improved $C_6$-freeness algorithm and the barrier of Eden et al. on proving lower bounds for triangle-freeness, we show that improving the current $\tilde{\Omega}(\sqrt{n})$ lower bound for $C_6$-freeness of Korhonen et al. by any polynomial factor would imply strong circuit complexity lower bounds.
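For reference, the centralized baseline against which such girth algorithms are measured is the folklore $O(nm)$ exact computation: BFS from every vertex, and whenever a non-tree edge $(a, b)$ is seen from root $r$, it closes a walk through $r$ of length $\mathrm{dist}(a)+\mathrm{dist}(b)+1$, which always contains a cycle at most that long; for a root on a shortest cycle, some edge of that cycle witnesses the girth exactly. A minimal sketch, assuming a simple graph given as an adjacency dict (an interface we chose for illustration):

```python
from collections import deque

# Folklore O(nm) exact girth baseline (centralized, not the paper's
# distributed algorithms). Returns float("inf") on forests.

def girth(adj):
    best = float("inf")
    for root in adj:
        dist, parent = {root: 0}, {root: None}
        queue = deque([root])
        while queue:
            a = queue.popleft()
            for b in adj[a]:
                if b not in dist:
                    dist[b], parent[b] = dist[a] + 1, a
                    queue.append(b)
                elif parent[a] != b:  # non-tree edge closes a cycle
                    best = min(best, dist[a] + dist[b] + 1)
    return best

# Example: a 5-cycle has girth 5.
print(girth({0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}))
```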