
A Massively Parallel Algorithm for Minimum Weight Vertex Cover

Published by Ce Jin
Publication date: 2020
Research field: Informatics Engineering
Research language: English





We present a massively parallel algorithm, with near-linear memory per machine, that computes a $(2+\varepsilon)$-approximation of minimum-weight vertex cover in $O(\log\log d)$ rounds, where $d$ is the average degree of the input graph. Our result fills the key remaining gap in the state-of-the-art MPC algorithms for vertex cover and matching, two classic optimization problems that are duals of each other. Concretely, a recent line of work, by Czumaj et al. [STOC18], Ghaffari et al. [PODC18], Assadi et al. [SODA19], and Gamlath et al. [PODC19], provides $O(\log\log n)$-time algorithms for $(1+\varepsilon)$-approximate maximum weight matching as well as for $(2+\varepsilon)$-approximate minimum cardinality vertex cover. However, the latter algorithm does not work for the general weighted case of vertex cover, for which the best known algorithm remained at $O(\log n)$ time complexity.
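The paper's MPC algorithm is not reproduced here. As background on the problem it solves, the following is a minimal sequential sketch of the classic primal-dual (pricing) 2-approximation for minimum-weight vertex cover; the example graph and weights are hypothetical.

```python
# Background only: the classic sequential primal-dual 2-approximation for
# minimum-weight vertex cover, not the paper's MPC algorithm.
def weighted_vertex_cover_2approx(edges, weight):
    """Repeatedly take an uncovered edge (u, v) and pay down both endpoints
    by the smaller remaining weight; vertices whose residual weight reaches
    zero join the cover. The cover weight is at most twice the optimum."""
    residual = dict(weight)   # remaining weight of each vertex
    cover = set()
    for u, v in edges:
        if u in cover or v in cover:
            continue          # edge already covered
        pay = min(residual[u], residual[v])
        residual[u] -= pay
        residual[v] -= pay
        if residual[u] == 0:
            cover.add(u)
        if residual[v] == 0:
            cover.add(v)
    return cover

# Hypothetical example: a triangle with weights 1, 2, 3.
edges = [("a", "b"), ("b", "c"), ("a", "c")]
weight = {"a": 1, "b": 2, "c": 3}
print(weighted_vertex_cover_2approx(edges, weight))  # e.g. {'a', 'b'}
```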




Read also

Yingli Ran, Zhao Zhang (2021)
In the minimum cost submodular cover problem (MinSMC), given a monotone non-decreasing submodular function $f\colon 2^V \rightarrow \mathbb{Z}^+$, a cost function $c\colon V\rightarrow \mathbb{R}^{+}$, and an integer $k\leq f(V)$, the goal is to find a subset $A\subseteq V$ of minimum cost such that $f(A)\geq k$. MinSMC has many applications in machine learning and data mining. In this paper, we design a parallel algorithm for MinSMC that obtains a solution with approximation ratio at most $\frac{H(\min\{\Delta,k\})}{1-5\varepsilon}$ with probability $1-3\varepsilon$ in $O(\frac{\log m\log n\log^2 mn}{\varepsilon^4})$ rounds, where $\Delta=\max_{v\in V}f(v)$, $H(\cdot)$ is the Harmonic number, $n=f(V)$, $m=|V|$, and $\varepsilon$ is a constant in $(0,\frac{1}{5})$. This is the first parallel algorithm for the weighted version of MinSMC with an approximation ratio arbitrarily close to $H(\min\{\Delta,k\})$.
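The parallel algorithm itself is not reproduced here. As background, the following is a minimal sketch of the classic sequential greedy for minimum-cost submodular cover, whose approximation ratio is roughly $H(\min\{\Delta,k\})$; the weighted set-cover instance used to exercise it is hypothetical.

```python
# Background only: the classic sequential greedy for minimum-cost submodular
# cover, not the paper's parallel algorithm.
def greedy_min_cost_submodular_cover(V, f, cost, k):
    """Repeatedly add the element with the best marginal-gain-per-cost ratio,
    measured against the truncated function min(f(.), k), until f(A) >= k."""
    A = set()
    while min(f(A), k) < k:
        best, best_ratio = None, 0.0
        for v in V - A:
            gain = min(f(A | {v}), k) - min(f(A), k)
            if gain > 0 and gain / cost[v] > best_ratio:
                best, best_ratio = v, gain / cost[v]
        if best is None:
            raise ValueError("target k is not reachable")
        A.add(best)
    return A

# Hypothetical example: weighted set cover viewed as a submodular cover instance.
sets = {"s1": {1, 2, 3}, "s2": {3, 4}, "s3": {4, 5, 6}}
cost = {"s1": 2.0, "s2": 1.0, "s3": 2.5}
f = lambda A: len(set().union(*(sets[s] for s in A))) if A else 0
print(greedy_min_cost_submodular_cover(set(sets), f, cost, k=6))
```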
Reconfiguration schedules, i.e., sequences that gradually transform one solution of a problem into another while always maintaining feasibility, have been extensively studied. Most research has dealt with the decision problem of whether a reconfiguration schedule exists, and the complexity of finding one. A prime example is the reconfiguration of vertex covers. We initiate the study of batched vertex cover reconfiguration, which allows reconfiguring multiple vertices concurrently while requiring that any adversarial reconfiguration order within a batch maintains feasibility. The latter provides robustness, e.g., if the simultaneous reconfiguration of a batch cannot be guaranteed. The quality of a schedule is measured by the number of batches until all nodes are reconfigured, and its cost, i.e., the maximum size of an intermediate vertex cover. To set a baseline for batch reconfiguration, we show that for graphs belonging to one of the classes $\{\mathsf{cycles, trees, forests, chordal, cactus, even\text{-}hole\text{-}free, claw\text{-}free}\}$, there are schedules that use $O(\varepsilon^{-1})$ batches and incur only a $1+\varepsilon$ multiplicative increase in cost over the best sequential schedules. Our main contribution is to compute such batch schedules in $O(\varepsilon^{-1}\log^* n)$ distributed time, which we also show to be tight. Further, we show that once we step out of these graph classes, we face a very different situation. There are graph classes on which no efficient distributed algorithm can obtain the best (or almost best) existing schedule. Moreover, there are classes of bounded-degree graphs that do not admit any reconfiguration schedule without incurring a large multiplicative increase in cost.
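To make the batched-schedule definition concrete, here is a toy check (illustrative only, not the paper's algorithm) that a proposed batch schedule keeps a valid vertex cover under every ordering of the toggles within each batch; the graph and schedule are hypothetical.

```python
# Toy check of the batched vertex-cover reconfiguration definition.
from itertools import permutations

def is_vertex_cover(cover, edges):
    """Every edge must have at least one endpoint in the cover."""
    return all(u in cover or v in cover for u, v in edges)

def batch_schedule_ok(start, batches, edges):
    """A batch toggles membership of its vertices; the schedule is robust if
    every ordering of the toggles inside each batch keeps a valid cover."""
    cover = set(start)
    for batch in batches:
        for order in permutations(batch):
            c = set(cover)
            for v in order:
                c.symmetric_difference_update({v})
                if not is_vertex_cover(c, edges):
                    return False
        cover.symmetric_difference_update(batch)
    return True

# Hypothetical example: on a 4-cycle, reconfigure cover {0, 2} into {1, 3}
# with two batches (add 1 and 3 first, then drop 0 and 2).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(batch_schedule_ok({0, 2}, [{1, 3}, {0, 2}], edges))  # True; intermediate cost is 4
```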
Data structures that allow efficient distance estimation (distance oracles, distance sketches, etc.) have been extensively studied, and are particularly well studied in centralized models and classical distributed models such as CONGEST. We initiate their study in newer (and arguably more realistic) models of distributed computation: the Congested Clique model and the Massively Parallel Computation (MPC) model. We provide efficient constructions in both of these models, but our core results are for MPC. In MPC we give two main results: an algorithm that constructs stretch/space optimal distance sketches but takes a (small) polynomial number of rounds, and an algorithm that constructs distance sketches with worse stretch but that only takes polylogarithmic rounds. Along the way, we show that other useful combinatorial structures can also be computed in MPC. In particular, one key component we use to construct distance sketches is an MPC construction of the hopsets of Elkin and Neiman (2016). This result has additional applications such as the first polylogarithmic-time algorithm for constant-approximate single-source shortest paths for weighted graphs in the low-memory MPC setting.
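Neither the paper's sketches nor its hopsets are reproduced here. As a toy illustration of what a distance sketch stores, the following hypothetical landmark-based sketch keeps each vertex's BFS distances to a few random landmarks and answers queries with an upper-bound estimate; it is not the paper's construction.

```python
# Toy landmark-based distance sketch (illustrative only).
import random
from collections import deque

def bfs(adj, source):
    """Unweighted BFS distances from a single source."""
    dist = {source: 0}
    q = deque([source])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                q.append(y)
    return dist

def build_sketches(adj, num_landmarks, seed=0):
    """Each vertex stores its distance to every reachable landmark."""
    landmarks = random.Random(seed).sample(sorted(adj), num_landmarks)
    tables = {l: bfs(adj, l) for l in landmarks}
    return {v: {l: tables[l][v] for l in landmarks if v in tables[l]} for v in adj}

def estimate(sketches, u, v):
    """min over shared landmarks of d(u, l) + d(l, v): an upper bound on d(u, v)."""
    common = sketches[u].keys() & sketches[v].keys()
    return min((sketches[u][l] + sketches[v][l] for l in common), default=float("inf"))

# Hypothetical example: path graph 0-1-2-3-4-5 with 2 landmarks.
adj = {i: {j for j in (i - 1, i + 1) if 0 <= j <= 5} for i in range(6)}
sk = build_sketches(adj, num_landmarks=2)
print(estimate(sk, 0, 5))  # an upper bound on the true distance 5
```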
A common approach for designing scalable algorithms for massive data sets is to distribute the computation across, say, $k$ machines and process the data using limited communication between them. A particularly appealing framework here is the simultaneous communication model, whereby each machine constructs a small representative summary of its own data and one obtains an approximate/exact solution from the union of the representative summaries. If the representative summaries needed for a problem are small, this results in a communication-efficient and round-optimal protocol. While many fundamental graph problems admit efficient solutions in this model, two prominent problems are notably absent from the list of successes, namely, the maximum matching problem and the minimum vertex cover problem. Indeed, it was shown recently that for both these problems, even achieving a polylog$(n)$ approximation requires essentially sending the entire input graph from each machine. The main insight of our work is that the intractability of matching and vertex cover in the simultaneous communication model is inherently connected to an adversarial partitioning of the underlying graph across machines. We show that when the underlying graph is randomly partitioned across machines, both these problems admit randomized composable coresets of size $\widetilde{O}(n)$ that yield an $\widetilde{O}(1)$-approximate solution. This results in an $\widetilde{O}(1)$-approximation simultaneous protocol for these problems with $\widetilde{O}(nk)$ total communication when the input is randomly partitioned across $k$ machines. We further prove the optimality of our results. Finally, by a standard application of composable coresets, our results also imply MapReduce algorithms with the same approximation guarantee in one or two rounds of communication.
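The paper's actual coreset constructions are not reproduced here. The following toy sketch only illustrates the simultaneous-communication workflow just described, using a greedy maximal matching of each machine's local graph as a stand-in summary (an assumption for illustration, not the paper's coreset).

```python
# Toy illustration of the simultaneous communication workflow with a random
# edge partition; the local summary used here is a stand-in, not the paper's.
import random

def greedy_maximal_matching(edges):
    """Scan edges once, adding an edge whenever both endpoints are free."""
    matched, matching = set(), []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

def simultaneous_matching(edges, k, seed=0):
    rng = random.Random(seed)
    machines = [[] for _ in range(k)]
    for e in edges:                       # random partition of edges
        machines[rng.randrange(k)].append(e)
    summaries = [greedy_maximal_matching(m) for m in machines]  # one simultaneous round
    union = [e for s in summaries for e in s]
    return greedy_maximal_matching(union)  # coordinator's answer from the union

# Hypothetical example: a small graph split across 3 machines.
edges = [(i, j) for i in range(10) for j in range(i + 1, 10) if (i + j) % 3 == 0]
print(simultaneous_matching(edges, k=3))
```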
Over the past decade, there has been increasing interest in distributed/parallel algorithms for processing large-scale graphs. By now, we have quite fast algorithms -- usually sublogarithmic-time and often $poly(\log\log n)$-time, or even faster -- for a number of fundamental graph problems in the massively parallel computation (MPC) model. This model is a widely-adopted theoretical abstraction of MapReduce-style settings, where a number of machines communicate in an all-to-all manner to process large-scale data. Contributing to this line of work on MPC graph algorithms, we present $poly(\log k) \in poly(\log\log n)$ round MPC algorithms for computing $O(k^{1+o(1)})$-spanners in the strongly sublinear regime of local memory. To the best of our knowledge, these are the first sublogarithmic-time MPC algorithms for spanner construction. As primary applications of our spanners, we get two important implications, as follows:
- For the MPC setting, we get an $O(\log^2\log n)$-round algorithm for $O(\log^{1+o(1)} n)$ approximation of all pairs shortest paths (APSP) in the near-linear regime of local memory. To the best of our knowledge, this is the first sublogarithmic-time MPC algorithm for distance approximations.
- Our result above also extends to the Congested Clique model of distributed computing, with the same round complexity and approximation guarantee. This gives the first sub-logarithmic algorithm for approximating APSP in weighted graphs in the Congested Clique model.
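The MPC spanner algorithms themselves are not reproduced here. As background on what a spanner is, the following is a minimal sequential sketch of the classic greedy $(2k-1)$-spanner construction for unweighted graphs; the example graph is hypothetical.

```python
# Background only: the classic sequential greedy (2k-1)-spanner construction,
# not the paper's MPC algorithm. An edge is kept only if its endpoints are not
# already within distance 2k-1 in the spanner built so far.
from collections import deque

def bfs_distance(adj, s, t, limit):
    """Distance from s to t in the current spanner, capped at limit + 1."""
    dist = {s: 0}
    q = deque([s])
    while q:
        x = q.popleft()
        if x == t:
            return dist[x]
        if dist[x] >= limit:
            continue
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                q.append(y)
    return limit + 1

def greedy_spanner(n, edges, k):
    adj = {v: set() for v in range(n)}
    spanner = []
    for u, v in edges:
        if bfs_distance(adj, u, v, limit=2 * k - 1) > 2 * k - 1:
            spanner.append((u, v))
            adj[u].add(v)
            adj[v].add(u)
    return spanner

# Hypothetical example: a 3-spanner (k = 2) of the complete graph on 6 vertices.
edges = [(i, j) for i in range(6) for j in range(i + 1, 6)]
print(greedy_spanner(6, edges, k=2))
```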