Imagine a large graph that is processed by a cluster of computers, modeled, e.g., by the $k$-machine model or the Massively Parallel Computation (MPC) model. The graph, however, is not static; instead, it receives a constant stream of updates. How fast can the cluster process this stream? The fundamental question we ask in this paper is whether the cluster can update the graph fast enough to keep up with the stream. We focus specifically on the problem of maintaining a minimum spanning tree (MST), and we give an algorithm for the $k$-machine model that can process $O(k)$ graph updates per $O(1)$ rounds with high probability; these results carry over to the MPC model. We also show a lower bound: it is impossible to process $k^{1+\epsilon}$ updates in $O(1)$ rounds. Thus we provide a nearly tight answer to the question of how fast a cluster can respond to a stream of graph modifications while maintaining an MST.
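To make the update primitive concrete, below is a minimal, sequential Python sketch of the classic cycle-rule insertion that dynamic-MST algorithms build on; the names `Tree` and `insert_edge` are ours for illustration, and deletions, batching, and the $k$-machine partitioning of the paper are omitted.

```python
from collections import defaultdict

class Tree:
    """Spanning forest stored as a weighted adjacency map."""
    def __init__(self):
        self.adj = defaultdict(dict)  # node -> {neighbor: weight}

    def add(self, u, v, w):
        self.adj[u][v] = w
        self.adj[v][u] = w

    def remove(self, u, v):
        del self.adj[u][v]
        del self.adj[v][u]

    def path(self, s, t):
        """Unique tree path from s to t as a node list, or None."""
        parent, stack = {s: None}, [s]
        while stack:
            x = stack.pop()
            for y in self.adj[x]:
                if y not in parent:
                    parent[y] = x
                    stack.append(y)
        if t not in parent:
            return None  # s and t lie in different components
        nodes = [t]
        while nodes[-1] != s:
            nodes.append(parent[nodes[-1]])
        return nodes[::-1]

def insert_edge(tree, u, v, w):
    """Insert (u, v, w) and restore the minimum spanning forest."""
    p = tree.path(u, v)
    if p is None:
        tree.add(u, v, w)  # joins two components, always kept
        return
    # Cycle rule: the heaviest edge on the cycle created by (u, v)
    # cannot belong to the MST, so swap it out if it is heavier than w.
    a, b = max(zip(p, p[1:]), key=lambda e: tree.adj[e[0]][e[1]])
    if tree.adj[a][b] > w:
        tree.remove(a, b)
        tree.add(u, v, w)
```

A distributed implementation would apply many such swaps in parallel, with the path and replacement-edge computations shared across the $k$ machines.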
This paper provides an algorithmic framework for obtaining fast distributed algorithms for a highly dynamic setting, in which *arbitrarily many* edge changes may occur in each round. Our algorithm significantly improves upon prior work.
We give an improved randomized CONGEST algorithm for distance-$2$ coloring that uses $\Delta^2+1$ colors and runs in $O(\log n)$ rounds, improving on the recent $O(\log \Delta \cdot \log n)$-round algorithm in [Halldórsson, Kuhn, Maus; PODC 2020].
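As a point of reference for the palette size, here is a small, centralized Python sketch of greedy distance-2 coloring; each node has at most $\Delta^2$ nodes within two hops, so colors $\{0, \dots, \Delta^2\}$ always suffice. This only illustrates the constraint and the color budget, not the randomized CONGEST algorithm itself.

```python
def distance2_coloring(adj):
    """Greedy distance-2 coloring.

    adj maps each node to the set of its neighbors. Every node gets a
    color different from all nodes within two hops; since at most
    Delta^2 nodes are that close, colors stay in {0, ..., Delta^2}.
    """
    color = {}
    for v in adj:
        forbidden = set()
        for u in adj[v]:
            forbidden.add(color.get(u))          # distance-1 neighbors
            for w in adj[u]:
                if w != v:
                    forbidden.add(color.get(w))  # distance-2 neighbors
        c = 0                                    # smallest free color
        while c in forbidden:
            c += 1
        color[v] = c
    return color

# Example: on the path a-b-c all three nodes are within two hops of
# each other, so they receive three distinct colors.
print(distance2_coloring({"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}))
```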
In this paper we show that approximation can help reduce the space used for self-stabilization. In the classic \emph{state model}, where the nodes of a network communicate by reading the states of their neighbors, an important measure of efficiency is the space each node uses to store its state.
In the literature, computer architectures are frequently claimed to be highly flexible, typically implying that there exist trade-offs between flexibility and performance or energy efficiency. Processor flexibility, however, is not sharply defined.
State-of-the-art summarization systems are trained and evaluated on massive datasets scraped from the web. Despite their prevalence, we know very little about the underlying characteristics (data noise, summarization complexity, etc.) of these datasets.