
Latency, Capacity, and Distributed MST

Added by Suman Sourav
Publication date: 2019
Language: English





We study the cost of distributed MST construction in the setting where each edge has a latency and a capacity, along with the weight. Edge latencies capture the delay on the links of the communication network, while capacity captures their throughput (in this case, the rate at which messages can be sent). Depending on how the edge latencies relate to the edge weights, we provide several tight bounds on the time and messages required to construct an MST. When edge weights exactly correspond with the latencies, we show that, perhaps interestingly, the bottleneck parameter in determining the running time of an algorithm is the total weight $W$ of the MST (rather than the total number of nodes $n$, as in the standard CONGEST model). That is, we show a tight bound of $\tilde{\Theta}(D + \sqrt{W/c})$ rounds, where $D$ refers to the latency diameter of the graph, $W$ refers to the total weight of the constructed MST, and edges have capacity $c$. The proposed algorithm sends $\tilde{O}(m+W)$ messages, where $m$, the total number of edges in the network graph under consideration, is a known lower bound on message complexity for MST construction. We also show that $\Omega(W)$ is a lower bound for fast MST constructions. When the edge latencies and the corresponding edge weights are unrelated, and either can take arbitrary values, we show that (unlike the sub-linear time algorithms in the standard CONGEST model on small diameter graphs) the best time complexity that can be achieved is $\tilde{\Theta}(D + n/c)$. However, if we restrict all edges to have equal latency $\ell$ and capacity $c$ while having possibly different weights (weights could deviate arbitrarily from $\ell$), we give an algorithm that constructs an MST in $\tilde{O}(D + \sqrt{n\ell/c})$ time. In each case, we provide nearly matching upper and lower bounds.
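The following toy sketch (not the paper's algorithm) makes the bounds concrete: it computes an MST weight $W$ with Kruskal's algorithm on a made-up graph and then evaluates the two round bounds from the abstract, $\tilde{O}(D + \sqrt{W/c})$ for the latency-equals-weight case and $\tilde{O}(D + \sqrt{n\ell/c})$ for the uniform-latency case, ignoring the polylogarithmic factors hidden by the tilde.

    # Toy sketch: Kruskal MST plus the abstract's round bounds (tilde factors dropped).
    import math

    def kruskal_mst_weight(n, edges):
        """edges: list of (weight, u, v); returns the total MST weight W."""
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x
        W = 0
        for w, u, v in sorted(edges):
            ru, rv = find(u), find(v)
            if ru != rv:                        # edge joins two components
                parent[ru] = rv
                W += w
        return W

    edges = [(1, 0, 1), (4, 1, 2), (3, 0, 2), (2, 2, 3)]  # hypothetical graph
    n, c, D, ell = 4, 1, 2, 1                  # capacity c, latency diameter D
    W = kruskal_mst_weight(n, edges)
    print("latency = weight:   ", D + math.sqrt(W / c))        # ~Theta(D + sqrt(W/c))
    print("uniform latency ell:", D + math.sqrt(n * ell / c))  # ~O(D + sqrt(n*ell/c))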

Related research

In this paper we show that approximation can help reduce the space used for self-stabilization. In the classic \emph{state model}, where the nodes of a network communicate by reading the states of their neighbors, an important measure of efficiency is the space: the number of bits used at each node to encode the state. In this model, a classic requirement is that the algorithm has to be \emph{silent}, that is, after stabilization the states should not change anymore. We design a silent self-stabilizing algorithm for the minimum spanning tree problem that offers a trade-off between the quality of the solution and the space needed to compute it.
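As a minimal illustration of the state model and the silence condition (this is a toy minimum-value computation, not the paper's space-efficient MST algorithm): each node repeatedly reads its neighbors' states and rewrites its own, and the execution is silent once a full pass changes nothing.

    # Toy state-model illustration (not the paper's algorithm): nodes read
    # neighbors' states and update; the run is "silent" when nothing changes.
    def run_until_silent(ids, adj):
        state = list(ids)                      # per-node state: O(log n) bits here
        rounds, changed = 0, True
        while changed:                         # silence = a pass with no change
            changed = False
            for v in range(len(ids)):
                new = min([state[v]] + [state[u] for u in adj[v]])
                if new != state[v]:
                    state[v], changed = new, True
            rounds += 1
        return state, rounds

    adj = [[1], [0, 2], [1, 3], [2]]           # a path on 4 nodes
    print(run_until_silent([7, 3, 9, 1], adj)) # all states converge to 1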
Seth Gilbert, Lawrence Li (2020)
Imagine a large graph that is being processed by a cluster of computers, e.g., described by the $k$-machine model or the Massively Parallel Computation (MPC) model. The graph, however, is not static; instead it receives a constant stream of updates. How fast can the cluster process the stream of updates? The fundamental question we ask in this paper is whether we can update the graph fast enough to keep up with the stream. We focus specifically on the problem of maintaining a minimum spanning tree (MST), and we give an algorithm for the $k$-machine model that can process $O(k)$ graph updates per $O(1)$ rounds with high probability (and these results carry over to the MPC model). We also show a lower bound: it is impossible to process $k^{1+\epsilon}$ updates in $O(1)$ rounds. Thus we provide a nearly tight answer to the question of how fast a cluster can respond to a stream of graph modifications while maintaining an MST.
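A rough scheduling sketch shows why $O(k)$ updates per round is the natural throughput (this mirrors the counting argument only, not the paper's MST maintenance algorithm): if incoming edge updates are hashed across the $k$ machines and each machine absorbs $O(1)$ buffered updates per round, the cluster processes $O(k)$ updates per round in aggregate.

    # Throughput sketch (hypothetical routing, not the paper's algorithm).
    from collections import deque

    k = 8
    buffers = [deque() for _ in range(k)]

    def route(update):
        u, v, w, op = update                   # op: 'insert' or 'delete'
        buffers[hash((min(u, v), max(u, v))) % k].append(update)

    def round_step():
        applied = []
        for b in buffers:                      # each machine applies O(1) updates
            if b:
                applied.append(b.popleft())    # here: update its MST fragment
        return applied                         # <= k updates per round overall

    for i in range(20):
        route((i, i + 1, 1.0, 'insert'))
    print(len(round_step()), "updates applied in one round")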
The increased use of micro-services to build web applications has spurred the rapid growth of Function-as-a-Service (FaaS) or serverless computing platforms. While FaaS simplifies provisioning and scaling for application developers, it introduces new challenges in resource management that need to be handled by the cloud provider. Our analysis of popular serverless workloads indicates that schedulers need to handle functions that are very short-lived, have unpredictable arrival patterns, and require expensive setup of sandboxes. The challenge of running a large number of such functions in a multi-tenant cluster makes existing scheduling frameworks unsuitable. We present Archipelago, a platform that enables low latency request execution in a multi-tenant serverless setting. Archipelago views each application as a DAG of functions, and every DAG is associated with a latency deadline. Archipelago achieves its per-DAG request latency goals by: (1) partitioning a given cluster into a number of smaller worker pools, and associating each pool with a semi-global scheduler (SGS), (2) using a latency-aware scheduler within each SGS along with proactive sandbox allocation to reduce overheads, and (3) using a load balancing layer to route requests for different DAGs to the appropriate SGS, and automatically scale the number of SGSs per DAG. Our testbed results show that Archipelago meets the latency deadline for more than 99% of realistic application request workloads, and reduces tail latencies by up to 36X compared to state-of-the-art serverless platforms.
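A minimal routing sketch under assumed structure (class and method names here are hypothetical, not Archipelago's API): each DAG is mapped to one semi-global scheduler, which serves requests earliest-deadline-first and prefers workers that already hold a warm sandbox for the function, avoiding the expensive sandbox setup the abstract mentions.

    # Hypothetical SGS routing sketch, not Archipelago's implementation.
    import heapq, itertools

    class SGS:
        def __init__(self, workers):
            self.workers = workers              # worker -> set of warm sandboxes
            self.queue = []                     # (deadline, seq, dag, fn)
            self.seq = itertools.count()
        def submit(self, dag, fn, deadline):
            heapq.heappush(self.queue, (deadline, next(self.seq), dag, fn))
        def schedule_one(self):
            if not self.queue:
                return None
            deadline, _, dag, fn = heapq.heappop(self.queue)  # earliest deadline first
            warm = [w for w, s in self.workers.items() if fn in s]
            worker = warm[0] if warm else next(iter(self.workers))
            self.workers[worker].add(fn)        # proactively keep the sandbox warm
            return dag, fn, worker

    pools = {"dagA": SGS({"w1": {"f1"}, "w2": set()}),  # load balancer's mapping
             "dagB": SGS({"w3": set()})}
    pools["dagA"].submit("dagA", "f1", deadline=10)     # routed by the balancer
    print(pools["dagA"].schedule_one())                 # -> ('dagA', 'f1', 'w1')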
A phaser is an expressive synchronization construct that unifies collective and point-to-point coordination with dynamic task parallelism. Each task can participate in a phaser as a signaler, a waiter, or both. The participants in a phaser may change over time as dynamic tasks are added and deleted. In this poster, we present a highly concurrent and scalable design of phasers for a distributed memory environment that is suitable for use with asynchronous partitioned global address space programming models. Our design for a distributed phaser employs a pair of skip lists augmented with the ability to collect and propagate synchronization signals. To enable a high degree of concurrency, addition and deletion of participant tasks are performed in two phases: a fast single-link-modify step followed by multiple hand-over-hand lazy multi-link-modify steps. We show that the cost of synchronization and structural operations on a distributed phaser scales logarithmically, even in the presence of concurrent structural modifications. To verify the correctness of our design for distributed phasers, we employ the SPIN model checker. To address the resulting state-space explosion, we describe how we decompose the state space to separately verify correct handling of different kinds of messages, which enables complete model checking of our phaser design.
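A toy shared-memory phaser helps fix the signaler/waiter roles (the paper's design is distributed and skip-list based; this sketch is neither): signalers advance the phase, and waiters block until the current phase completes.

    # Toy phaser: signalers advance the phase, waiters block on it.
    import threading

    class TinyPhaser:
        def __init__(self, signalers):
            self.signalers = signalers
            self.count = 0
            self.phase = 0
            self.cv = threading.Condition()
        def signal(self):
            with self.cv:
                self.count += 1
                if self.count == self.signalers:  # last signaler closes the phase
                    self.count = 0
                    self.phase += 1
                    self.cv.notify_all()
        def wait(self, phase):
            with self.cv:
                while self.phase <= phase:        # block until the phase advances
                    self.cv.wait()

    p = TinyPhaser(signalers=2)
    def worker():
        p.signal()
        p.wait(0)            # a task can be both signaler and waiter
    threads = [threading.Thread(target=worker) for _ in range(2)]
    [t.start() for t in threads]; [t.join() for t in threads]
    print("phase advanced to", p.phase)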
Recent years have witnessed a rapid growth of deep-network based services and applications. A practical and critical problem has thus emerged: how to deploy deep neural network models so that they can be executed efficiently. Conventional cloud-based approaches usually run the deep models in data center servers, incurring large latency because a significant amount of data has to be transferred from the network edge to the data center. In this paper, we propose JALAD, a joint accuracy- and latency-aware execution framework that decouples a deep neural network so that one part runs on edge devices and the other part in the conventional cloud, while only a minimal amount of data is transferred between them. Though the idea seems straightforward, we face several challenges: i) how to find the best partition of a deep structure; ii) how to deploy the edge-side component on a device with limited computation power; and iii) how to minimize the overall execution latency. Our answers to these questions are a set of strategies in JALAD: 1) a normalization-based in-layer data compression strategy that jointly considers compression rate and model accuracy; 2) a latency-aware deep decoupling strategy to minimize the overall execution latency; and 3) an edge-cloud structure adaptation strategy that dynamically changes the decoupling under different network conditions. Experiments demonstrate that our solution significantly reduces execution latency: it speeds up overall inference while keeping the model accuracy loss within a guaranteed bound.
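A back-of-the-envelope sketch of the decoupling decision (all numbers and names are made up; this mirrors the idea, not JALAD's implementation): pick the split layer minimizing edge compute time plus quantized-activation transfer time plus cloud compute time, with an 8-bit normalization-based compression of the activations.

    # Hypothetical split-point selection plus 8-bit activation quantization.
    import numpy as np

    edge_ms   = [5, 9, 14, 30]        # cumulative edge latency up to each layer
    cloud_ms  = [8, 6, 4, 0]          # cloud latency for the remaining layers
    act_bytes = [4e6, 1e6, 2e5, 1e4]  # activation size at each candidate split
    bw = 1e6                          # bytes/ms of edge-to-cloud bandwidth

    def total_latency(i, bits=8):
        compressed = act_bytes[i] * bits / 32   # in-layer quantization, 32 -> 8 bit
        return edge_ms[i] + compressed / bw + cloud_ms[i]

    best = min(range(4), key=total_latency)
    print("split after layer", best, "->", round(total_latency(best), 2), "ms")

    # Normalization-based 8-bit quantization of an activation tensor (lossy):
    x = np.random.randn(64).astype(np.float32)
    scale = np.abs(x).max() / 127
    q = np.round(x / scale).astype(np.int8)     # send q and scale to the cloud
    x_hat = q.astype(np.float32) * scale        # cloud-side dequantization
    print("max abs error:", float(np.abs(x - x_hat).max()))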