
Parallel Batch-Dynamic $k$-Core Decomposition

Posted by: Quanquan C. Liu
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Maintaining a $k$-core decomposition quickly in a dynamic graph is an important problem in many applications, including social network analytics, graph visualization, centrality measure computations, and community detection algorithms. The main challenge for designing efficient $k$-core decomposition algorithms is that a single change to the graph can cause the decomposition to change significantly. We present the first parallel batch-dynamic algorithm for maintaining an approximate $k$-core decomposition that is efficient in both theory and practice. Given an initial graph with $m$ edges, and a batch of $B$ updates, our algorithm maintains a $(2+\delta)$-approximation of the coreness values for all vertices (for any constant $\delta > 0$) in $O(B\log^2 m)$ amortized work and $O(\log^2 m \log\log m)$ depth (parallel time) with high probability. Our algorithm also maintains a low out-degree orientation of the graph in the same bounds. We implemented and experimentally evaluated our algorithm on a 30-core machine with two-way hyper-threading on $11$ graphs of varying densities and sizes. Compared to the state-of-the-art algorithms, our algorithm achieves up to a 114.52x speedup against the best multicore implementation and up to a 497.63x speedup against the best sequential algorithm, obtaining results for graphs that are orders-of-magnitude larger than those used in previous studies. In addition, we present the first approximate static $k$-core algorithm with linear work and polylogarithmic depth. We show that on a 30-core machine with two-way hyper-threading, our implementation achieves up to a 3.9x speedup in the static case over the previous state-of-the-art parallel algorithm.
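For intuition about the object being maintained, exact coreness values can be computed statically by the classic sequential peeling algorithm (Batagelj–Zaveršnik), which repeatedly removes a minimum-degree vertex. The Python sketch below is a baseline illustration under the assumption of an adjacency-set graph representation; it is not the paper's parallel batch-dynamic algorithm.

```python
from collections import defaultdict

def core_decomposition(adj):
    """Exact coreness for every vertex via bucket peeling.

    adj: dict mapping each vertex to the set of its neighbours
    (an undirected simple graph). Repeatedly removes a vertex of
    minimum remaining degree; runs in O(n + m) time.
    """
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    buckets = defaultdict(set)          # current degree -> vertices
    for v, d in deg.items():
        buckets[d].add(v)
    core, k, remaining = {}, 0, len(adj)
    while remaining:
        while not buckets[k]:           # advance to the lowest non-empty bucket
            k += 1
        v = buckets[k].pop()            # peel a minimum-degree vertex
        core[v] = k
        remaining -= 1
        for u in adj[v]:                # unpeeled neighbours lose one degree
            if u not in core and deg[u] > k:
                buckets[deg[u]].discard(u)
                deg[u] -= 1
                buckets[deg[u]].add(u)
    return core

# Example: a 4-clique with a pendant vertex; the clique has coreness 3.
adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2, 4}, 4: {3}}
assert core_decomposition(adj) == {0: 3, 1: 3, 2: 3, 3: 3, 4: 1}
```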


Read also

In this paper, we study new batch-dynamic algorithms for the $k$-clique counting problem, which are dynamic algorithms where the updates are batches of edge insertions and deletions. We study this problem in the parallel setting, where the goal is to obtain algorithms with low (polylogarithmic) depth. Our first result is a new parallel batch-dynamic triangle counting algorithm with $O(\Delta\sqrt{\Delta+m})$ amortized work and $O(\log^*(\Delta+m))$ depth with high probability, and $O(\Delta+m)$ space for a batch of $\Delta$ edge insertions or deletions. Our second result is an algebraic algorithm based on parallel fast matrix multiplication. Assuming that a parallel fast matrix multiplication algorithm exists with parallel matrix multiplication constant $\omega_p$, the same algorithm solves dynamic $k$-clique counting with $O\left(\min\left(\Delta m^{\frac{(2k-1)\omega_p}{3(\omega_p+1)}}, (\Delta+m)^{\frac{2(k+1)\omega_p}{3(\omega_p+1)}}\right)\right)$ amortized work and $O(\log(\Delta+m))$ depth with high probability, and $O\left((\Delta+m)^{\frac{2(k+1)\omega_p}{3(\omega_p+1)}}\right)$ space. Using a recently developed parallel $k$-clique counting algorithm, we also obtain a simple batch-dynamic algorithm for $k$-clique counting on graphs with arboricity $\alpha$ running in $O(\Delta(m+\Delta)\alpha^{k-4})$ expected work and $O(\log^{k-2} n)$ depth with high probability, and $O(m+\Delta)$ space. Finally, we present a multicore CPU implementation of our parallel batch-dynamic triangle counting algorithm. On a 72-core machine with two-way hyper-threading, our implementation achieves 36.54--74.73x parallel speedup, and in certain cases achieves significant speedups over existing parallel algorithms for the problem, which are not theoretically efficient.
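As background for the dynamic bounds above, the standard static baseline counts triangles in $O(m^{3/2})$ work by ranking vertices by degree, orienting each edge toward its higher-ranked endpoint, and closing wedges with set intersections. The Python sketch below illustrates that sequential baseline only; the tuple-list edge encoding is an assumption of this sketch, and it is not the paper's batch-dynamic algorithm.

```python
from collections import defaultdict

def count_triangles(edges):
    """Exact triangle count in O(m^{3/2}) time via degree ordering:
    orient each edge toward its higher-ranked endpoint, then close
    directed wedges with set intersections."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Rank vertices by (degree, id); every out-neighbourhood then has
    # size O(sqrt(m)), which gives the m^{3/2} bound.
    rank = {v: i for i, v in enumerate(sorted(adj, key=lambda v: (len(adj[v]), v)))}
    out = {v: {u for u in adj[v] if rank[u] > rank[v]} for v in adj}
    # Each triangle is counted exactly once, at its lowest-ranked vertex.
    return sum(len(out[v] & out[u]) for v in adj for u in out[v])

# Example: K4 contains exactly 4 triangles.
k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
assert count_triangles(k4) == 4
```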
This paper gives a new deterministic algorithm for the dynamic Minimum Spanning Forest (MSF) problem in the EREW PRAM model, where the goal is to maintain an MSF of a weighted graph with $n$ vertices and $m$ edges while supporting edge insertions and deletions. We show that one can solve the dynamic MSF problem using $O(\sqrt{n})$ processors and $O(\log n)$ worst-case update time, for a total of $O(\sqrt{n}\log n)$ work. This improves on the work of Ferragina [IPPS 1995], which costs $O(\log n)$ worst-case update time and $O(n^{2/3} \log\frac{m}{n})$ work.
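To pin down the object being maintained, a minimal sequential sketch of a static MSF computation (Kruskal's algorithm with union-find) follows; it is a baseline under the assumption of $(weight, u, v)$ edge triples, not the paper's EREW PRAM update procedure.

```python
def minimum_spanning_forest(n, weighted_edges):
    """Sequential static baseline (Kruskal with union-find): computes a
    minimum spanning forest of a weighted undirected graph on vertices
    0..n-1. weighted_edges: iterable of (weight, u, v) triples.
    Returns the forest as a list of (u, v, weight) edges.
    """
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    forest = []
    for w, u, v in sorted(weighted_edges):  # scan edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                        # different trees: keep the edge
            parent[ru] = rv
            forest.append((u, v, w))
    return forest

# Example: a path 0-1-2 plus a heavier chord 0-2, and an isolated vertex 3.
edges = [(1, 0, 1), (2, 1, 2), (5, 0, 2)]
assert minimum_spanning_forest(4, edges) == [(0, 1, 1), (1, 2, 2)]
```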
Network decomposition is a central concept in the study of distributed graph algorithms. We present the first polylogarithmic-round deterministic distributed algorithm with small messages that constructs a strong-diameter network decomposition with polylogarithmic parameters. Concretely, a $(C, D)$ strong-diameter network decomposition is a partitioning of the nodes of the graph into disjoint clusters, colored with $C$ colors, such that neighboring clusters have different colors and the subgraph induced by each cluster has a diameter at most $D$. In the weak-diameter variant, the requirement is relaxed by measuring the diameter of each cluster in the original graph, instead of the subgraph induced by the cluster. A recent breakthrough of Rozhoň and Ghaffari [STOC 2020] presented the first $\text{poly}(\log n)$-round deterministic algorithm for constructing a weak-diameter network decomposition where $C$ and $D$ are both in $\text{poly}(\log n)$. Their algorithm uses small $O(\log n)$-bit messages. One can transform their algorithm to a strong-diameter network decomposition algorithm with similar parameters. However, that comes at the expense of requiring unbounded messages. The key remaining qualitative question in the study of network decompositions was whether one can achieve a similar result for strong-diameter network decompositions using small messages. We resolve this question by presenting a novel technique that can transform any black-box weak-diameter network decomposition algorithm to a strong-diameter one, using small messages and with only moderate loss in the parameters.
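To make the $(C, D)$ definition above concrete, the following sketch checks whether a given clustering and cluster coloring satisfy the strong-diameter property; it is a centralized definition checker written for illustration, not the distributed construction from the paper, and the dict-based encodings are assumptions of this sketch.

```python
from collections import deque

def is_strong_decomposition(adj, cluster, color, C, D):
    """Verifies the (C, D) strong-diameter property defined above.

    adj: vertex -> set of neighbours; cluster: vertex -> cluster id;
    color: cluster id -> one of C colors.
    """
    if len(set(color.values())) > C:
        return False
    # Neighbouring distinct clusters must receive different colors.
    for v in adj:
        for u in adj[v]:
            if cluster[u] != cluster[v] and color[cluster[u]] == color[cluster[v]]:
                return False
    # Each cluster's *induced* subgraph must have diameter at most D
    # (strong diameter: paths may not leave the cluster).
    members = {}
    for v, c in cluster.items():
        members.setdefault(c, set()).add(v)
    for part in members.values():
        for s in part:                      # BFS inside the cluster from s
            dist = {s: 0}
            q = deque([s])
            while q:
                v = q.popleft()
                for u in adj[v]:
                    if u in part and u not in dist:
                        dist[u] = dist[v] + 1
                        if dist[u] > D:
                            return False
                        q.append(u)
            if len(dist) < len(part):       # induced subgraph is disconnected
                return False
    return True
```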
Network decomposition is a central tool in distributed graph algorithms. We present two improvements on the state of the art for network decomposition, which thus lead to improvements in the (deterministic and randomized) complexity of several well-studied graph problems.
- We provide a deterministic distributed network decomposition algorithm with $O(\log^5 n)$ round complexity, using $O(\log n)$-bit messages. This improves on the $O(\log^7 n)$-round algorithm of Rozhoň and Ghaffari [STOC20], which used large messages, and their $O(\log^8 n)$-round algorithm with $O(\log n)$-bit messages. This directly leads to similar improvements for a wide range of deterministic and randomized distributed algorithms whose solution relies on network decomposition, including the general distributed derandomization of Ghaffari, Kuhn, and Harris [FOCS18].
- One drawback of the algorithm of Rozhoň and Ghaffari, in the $\mathsf{CONGEST}$ model, was its dependence on the length of the identifiers. Because of this, for instance, the algorithm could not be used in the shattering framework in the $\mathsf{CONGEST}$ model. Thus, the state-of-the-art randomized complexity of several problems in this model remained with an additive $2^{O(\sqrt{\log\log n})}$ term, which was a clear leftover of the older network decomposition complexity [Panconesi and Srinivasan, STOC92]. We present a modified version that remedies this, constructing a decomposition whose quality does not depend on the identifiers, and thus improves the randomized round complexity for various problems.