
Efficient Processing of Very Large Graphs in a Small Cluster

Published by: Da Yan
Publication date: 2016
Research field: Informatics Engineering
Paper language: English





Inspired by the success of Google's Pregel, many systems have been developed recently for iterative computation over big graphs. These systems provide a user-friendly vertex-centric programming interface, where a programmer only needs to specify the behavior of one generic vertex when developing a parallel graph algorithm. However, most existing systems require the input graph to reside in the memory of the machines in a cluster, and the few out-of-core systems suffer from problems such as poor efficiency for sparse computation workloads, high demand on network bandwidth, and the expensive cost incurred by external-memory join and group-by. In this paper, we introduce the GraphD system, which lets a user process very large graphs with ordinary computing resources. GraphD fully overlaps computation with communication by streaming edges and messages on local disks while transmitting messages in parallel. For a broad class of Pregel algorithms where a message combiner is applicable, GraphD eliminates the need for any expensive external-memory join or group-by. These key techniques allow GraphD to achieve performance comparable to in-memory Pregel-like systems without keeping edges or messages in memory. We prove that to process a graph G=(V, E) with n machines using GraphD, each machine requires only O(|V|/n) memory space, allowing GraphD to scale to very large graphs with a small cluster. Extensive experiments show that GraphD beats existing out-of-core systems by orders of magnitude, and achieves performance comparable to in-memory systems running with sufficient memory.
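To make the combiner-based algorithm class concrete, here is a minimal Python sketch of Pregel-style PageRank with a sum combiner. It is illustrative only: the function names and the in-memory driver are ours, not GraphD's API, which streams these edges and messages from local disks. The point is that whenever messages to the same destination can be combined (here, summed), the system never needs to materialize and group the full message set.

```python
from collections import defaultdict

def combiner(msg_a, msg_b):
    # Sum combiner: incoming PageRank contributions to the same vertex
    # can be merged eagerly, so only one partial value per destination
    # is kept instead of a full list of messages to join/group later.
    return msg_a + msg_b

def scatter(adj, ranks):
    """Each vertex sends its rank share to its out-neighbors."""
    outbox = defaultdict(float)
    for v, nbrs in adj.items():
        if nbrs:
            share = ranks[v] / len(nbrs)
            for u in nbrs:
                outbox[u] = combiner(outbox[u], share)  # combine on the fly
    return outbox

# Toy directed graph as an adjacency dict (hypothetical example data).
adj = {0: [1, 2], 1: [2], 2: [0]}
n = len(adj)
ranks = {v: 1.0 / n for v in adj}
inbox = scatter(adj, ranks)
for _ in range(30):  # supersteps
    ranks = {v: 0.15 / n + 0.85 * inbox.get(v, 0.0) for v in adj}
    inbox = scatter(adj, ranks)
print(ranks)
```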




Read also

As large-graph processing emerges, we observe a costly fork-processing pattern (FPP) that is common in many graph algorithms. The unique feature of an FPP is that it launches many independent queries from different source vertices on the same graph. For example, an algorithm analyzing the network community profile can execute Personalized PageRank from tens of thousands of source vertices at the same time. We study the efficiency of handling FPPs in state-of-the-art graph processing systems on multi-core architectures, and find that those systems suffer from a severe cache-miss penalty because of the irregular and uncoordinated memory accesses in processing FPPs. In this paper, we propose ForkGraph, a cache-efficient FPP processing system for multi-core architectures. To improve cache reuse, we divide the graph into partitions, each sized to LLC capacity, and the queries in an FPP are buffered and executed on a per-partition basis. We further develop efficient intra- and inter-partition execution strategies. For intra-partition processing, since a graph partition fits into the LLC, we execute each graph query with efficient sequential algorithms (in contrast with the parallel algorithms in existing parallel graph processing systems) and present atomic-free query processing by consolidating contending operations on the cache-resident graph partition. For inter-partition processing, we propose yielding and priority-based scheduling to reduce redundant work. Moreover, we theoretically prove that ForkGraph performs the same amount of work, to within a constant factor, as the fastest known sequential algorithms for FPP query processing, i.e., it is work-efficient. Our evaluations on real-world graphs show that ForkGraph significantly outperforms state-of-the-art graph processing systems, with two orders of magnitude speedups.
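The following Python sketch illustrates the partition-buffering idea (not ForkGraph's actual implementation): many BFS-style queries share one graph, work items are buffered per partition, and each partition's buffered work for all queries is drained before moving on, so all queries touch one cache-sized region at a time. PART_SIZE, part_of, and the label-correcting relaxation are our illustrative stand-ins; label-correcting updates keep distances correct under any processing order, at the cost of the redundant work that ForkGraph's scheduling is designed to reduce.

```python
from collections import deque

PART_SIZE = 2  # vertices per partition; stand-in for "fits in LLC"

def part_of(v):
    return v // PART_SIZE

def buffered_multi_bfs(adj, sources):
    """Run one BFS per source, buffering work items by graph partition."""
    n = len(adj)
    nparts = (n + PART_SIZE - 1) // PART_SIZE
    dist = {s: {s: 0} for s in sources}          # per-query distances
    buffers = [deque() for _ in range(nparts)]   # (query, vertex) per partition
    for s in sources:
        buffers[part_of(s)].append((s, s))
    while any(buffers):
        for p in range(nparts):                  # sweep partitions in order
            buf = buffers[p]
            while buf:                           # drain all queries' work here
                q, v = buf.popleft()
                for u in adj[v]:
                    nd = dist[q][v] + 1
                    # Label-correcting relax: only re-enqueue on improvement.
                    if u not in dist[q] or nd < dist[q][u]:
                        dist[q][u] = nd
                        buffers[part_of(u)].append((q, u))
    return dist

# Toy undirected graph as an adjacency list (hypothetical example data).
adj = [[1, 2], [0, 3], [0, 3], [1, 2]]
print(buffered_multi_bfs(adj, sources=[0, 3]))
```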
The dispersion problem on graphs requires $k$ robots placed arbitrarily at the $n$ nodes of an anonymous graph, where $k \leq n$, to coordinate with each other to reach a final configuration in which each robot is at a distinct node of the graph. The dispersion problem is important due to its relationship to graph exploration by mobile robots, scattering on a graph, and load balancing on a graph. In addition, an intrinsic application of dispersion has been shown to be the relocation of self-driven electric cars (robots) to recharge stations (nodes). We propose three efficient algorithms to solve dispersion on graphs. Our algorithms require $O(k \log \Delta)$ bits at each robot and $O(m)$ steps of running time, where $m$ is the number of edges and $\Delta$ is the maximum degree of the graph. The algorithms differ in whether they address the synchronous or the asynchronous system model, and in what, where, and how data structures are maintained.
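As a rough illustration of the DFS idea commonly used for dispersion (a centralized toy simulation under our own assumptions, not the paper's three distributed algorithms), the sketch below lets each group of co-located robots walk a depth-first traversal and settle one robot at every unoccupied node it reaches:

```python
def disperse(adj, start_counts):
    """adj: adjacency dict; start_counts[v]: number of robots starting at v.
    Assumes a connected graph with total robots <= number of nodes."""
    occupied = set()
    for v, k in start_counts.items():           # one DFS per robot group
        stack, seen = [v], {v}
        while k > 0 and stack:
            u = stack.pop()
            if u not in occupied:
                occupied.add(u)                  # one robot settles here
                k -= 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        assert k == 0, "needs k <= n reachable nodes"
    return occupied

# Hypothetical example: 3 robots at node 0, 1 robot at node 3.
adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
print(disperse(adj, {0: 3, 3: 1}))
```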
Graphs are by nature unifying abstractions that can leverage interconnectedness to represent, explore, predict, and explain real- and digital-world phenomena. Although real users and consumers of graph instances and graph workloads understand these abstractions, future problems will require new abstractions and systems. What needs to happen in the next decade for big graph processing to continue to succeed?
Attributed graphs model real networks by enriching their nodes with attributes accounting for properties. Several techniques have been proposed for partitioning these graphs into clusters that are homogeneous with respect to both semantic attributes and the structure of the graph. However, the time and space complexity of state-of-the-art algorithms limits their scalability to medium-sized graphs. We propose SToC (for Semantic-Topological Clustering), a fast and scalable algorithm for partitioning large attributed graphs. The approach is robust, being compatible both with categorical and with quantitative attributes, and it is tailorable, allowing the user to weight the semantic and topological components. Further, the approach does not require the user to guess the number of clusters in advance. SToC relies on well-known approximation techniques such as bottom-k sketches, traditional graph-theoretic concepts, and a new perspective on the composition of heterogeneous distance measures. Experimental results demonstrate its ability to efficiently compute high-quality partitions of large-scale attributed graphs.
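As an illustration of one building block the abstract names, here is a minimal bottom-k sketch in Python that estimates the Jaccard similarity of two sets. This is the standard k-minimum-values construction, not SToC's code, and the hash function and example sets are ours:

```python
import hashlib

def h(x):
    # Deterministic hash of an item to a value in [0, 1).
    d = hashlib.sha1(str(x).encode()).digest()
    return int.from_bytes(d[:8], "big") / 2**64

def bottom_k(items, k):
    # The sketch is simply the k smallest distinct hash values of the set.
    return sorted({h(x) for x in items})[:k]

def jaccard_estimate(sk_a, sk_b, k):
    # The k smallest hashes of A union B are among the two sketches' values;
    # the fraction of them present in BOTH sketches estimates |A&B|/|A|B|.
    union_sk = sorted(set(sk_a) | set(sk_b))[:k]
    inter = set(sk_a) & set(sk_b)
    return sum(1 for v in union_sk if v in inter) / len(union_sk)

a = set(range(0, 800))
b = set(range(400, 1200))
k = 128
print(jaccard_estimate(bottom_k(a, k), bottom_k(b, k), k))  # true J = 1/3
```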
We show that the $(\mathit{degree}+1)$-list coloring problem can be solved deterministically in $O(D \cdot \log n \cdot \log^2 \Delta)$ rounds in the CONGEST model, where $D$ is the diameter of the graph, $n$ the number of nodes, and $\Delta$ the maximum degree. Using the recent polylogarithmic-time deterministic network decomposition algorithm by Rozhoň and Ghaffari [STOC 2020], this implies the first efficient (i.e., $\mathrm{polylog}\, n$-time) deterministic CONGEST algorithm for the $(\Delta+1)$-coloring and the $(\mathit{degree}+1)$-list coloring problems. Previously, the best known algorithm required $2^{O(\sqrt{\log n})}$ rounds and was not based on network decompositions. Our techniques also lead to deterministic $(\mathit{degree}+1)$-list coloring algorithms for the congested clique and the massively parallel computation (MPC) model. For the congested clique, we obtain an algorithm with time complexity $O(\log \Delta \cdot \log\log \Delta)$; for the MPC model, we obtain algorithms with round complexity $O(\log^2 \Delta)$ for the linear-memory regime and $O(\log^2 \Delta + \log n)$ for the sublinear-memory regime.
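As background for the problem itself (a sequential Python sketch, not the distributed CONGEST algorithm), the snippet below shows why a $(\mathit{degree}+1)$-list coloring always exists: a vertex with $\deg(v)+1$ allowed colors can always avoid the colors of its already-colored neighbors, so a greedy pass never gets stuck:

```python
def list_color(adj, lists):
    """Greedy (degree+1)-list coloring on an adjacency dict."""
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        # Since |lists[v]| >= deg(v) + 1 and at most deg(v) colors are
        # used by neighbors, a free color always exists.
        color[v] = next(c for c in lists[v] if c not in used)
    return color

# Hypothetical example: a triangle, each vertex with deg(v)+1 = 3 colors.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
lists = {v: list(range(len(adj[v]) + 1)) for v in adj}
print(list_color(adj, lists))
```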