
Gradient Clock Synchronization using Reference Broadcasts

Added by Fabian Kuhn
Publication date: 2009
Language: English





In this paper we suggest a method by which reference broadcast synchronization (RBS), and other methods of estimating clock values, can be incorporated in standard clock synchronization algorithms to improve synchronization quality. We advocate a logical separation of the task of estimating the clock values of other nodes in the network from the task of using these estimates to output a logical clock value. The separation is achieved by means of a virtual estimate graph, overlaid on top of the real network graph, which represents the information various nodes can obtain about each other. RBS estimates are represented in the estimate graph as edges between nodes at distance 2 from each other in the original network graph. A clock synchronization algorithm then operates on the estimate graph as though it were the original network. To illustrate the merits of this approach, we modify a recent optimal gradient clock synchronization algorithm to work in this setting. The modified algorithm transparently takes advantage of RBS estimates and any other means by which nodes can estimate each other's clock values.
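
As a rough illustration of the overlay idea, the sketch below is not the paper's algorithm; the function `build_estimate_graph` and the dictionary-based graph representation are invented for this example. It adds an estimate edge for every ordinary network edge and for every pair of nodes that share a common neighbour, i.e. the distance-2 pairs that RBS can relate through a common reference broadcast.

```python
from itertools import combinations

def build_estimate_graph(adjacency):
    """adjacency: dict mapping each node to the set of its network neighbours."""
    estimate_edges = set()
    # Direct edges: neighbours exchange messages and estimate each other directly.
    for u, nbrs in adjacency.items():
        for v in nbrs:
            estimate_edges.add(frozenset((u, v)))
    # RBS edges: two receivers of a common broadcaster (distance 2 in the
    # network graph) can compare reception times of the same reference message.
    for broadcaster, receivers in adjacency.items():
        for u, v in combinations(receivers, 2):
            estimate_edges.add(frozenset((u, v)))
    return estimate_edges

if __name__ == "__main__":
    # Path a - b - c: RBS through b adds the estimate edge {a, c}.
    network = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
    print(sorted(tuple(sorted(e)) for e in build_estimate_graph(network)))
```

A gradient clock synchronization algorithm would then run unchanged on the returned edge set instead of on the physical topology.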



Related research

We study the problem of clock synchronization in highly dynamic networks, where communication links can appear or disappear at any time. The nodes in the network are equipped with hardware clocks, but the rate of the hardware clocks can vary arbitrarily within specific bounds, and the estimates that nodes can obtain about the clock values of other nodes are inherently inaccurate. Our goal in this setting is to output a logical clock at each node such that the logical clocks of any two nodes are not too far apart, and nodes that remain close to each other in the network for a long time are better synchronized than distant nodes. This property is called gradient clock synchronization. Gradient clock synchronization has been widely studied in the static setting, where the network topology does not change. We show that the asymptotically optimal bounds obtained for the static case also apply to our highly dynamic setting: if two nodes remain at distance $d$ from each other for sufficiently long, it is possible to upper bound the difference between their clock values by $O(d \log(D/d))$, where $D$ is the diameter of the network. This is known to be optimal even for static networks. Furthermore, we show that our algorithm has optimal stabilization time: when a path of length $d$ appears between two nodes, the time required until the clock skew between the two nodes is reduced to $O(d \log(D/d))$ is $O(D)$, which we prove to be optimal. Finally, the techniques employed for the more intricate analysis of the algorithm for dynamic graphs provide additional insights that are also of interest for the static setting. In particular, we establish self-stabilization of the gradient property within $O(D)$ time.
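
For intuition about the gradient bound, the following toy computation evaluates $d \log(D/d)$ for a few distances in a network of diameter $D = 1024$. The constant factor is arbitrary and the numbers are purely illustrative, since the bound is only asymptotic.

```python
import math

# Purely illustrative evaluation of the asymptotic gradient bound
# O(d * log(D / d)); the constant factor c is arbitrary.
def skew_bound(d, D, c=1.0):
    return c * d * math.log(D / d)

D = 1024
for d in (1, 4, 16, 64, 256):
    print(f"d = {d:4d}  ->  skew bound ~ {skew_bound(d, D):7.1f}")
```

The point of the gradient property is visible in the output: nearby nodes are guaranteed much tighter synchronization than nodes that are far apart.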
Clock synchronization and ranging over a wireless network with low communication overhead is a challenging goal with tremendous impact. In this paper, we study the use of time-to-digital converters in wireless sensors, which provides clock synchronization and ranging at negligible communication overhead through a sawtooth signal model for round-trip times between two nodes. In particular, we derive Cramér-Rao lower bounds for a linearization of the sawtooth signal model, and we thoroughly evaluate simple estimation techniques by simulation, giving clear and concise performance references for this technology.
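
A very small simulation may help picture the setting. The sketch below assumes a simplified sawtooth round-trip-time model (a nominal delay plus a wrapped phase term plus Gaussian noise; all parameter names and values are invented) and fits a straight line on an unwrapped stretch, which is roughly what a linearized estimator does.

```python
import numpy as np

# Toy sketch (simplified, not the paper's exact model): measured round-trip
# times follow a nominal delay plus a sawtooth phase term caused by the
# quantized time-to-digital converter, plus Gaussian noise.
rng = np.random.default_rng(0)
n, period = 200, 1.0               # number of exchanges, sawtooth period (a.u.)
true_offset, true_skew = 3.7, 1e-3

t = np.arange(n, dtype=float)                   # exchange index
phase = (true_skew * t + true_offset) % period  # sawtooth term
rtt = 10.0 + phase + rng.normal(0.0, 0.01, n)   # nominal RTT + phase + noise

# Naive estimator: on an unwrapped (linearized) stretch, a least-squares line
# fit recovers the skew as the slope. Real estimators must handle the wraps.
A = np.vstack([t, np.ones_like(t)]).T
slope, intercept = np.linalg.lstsq(A, rtt, rcond=None)[0]
print(f"estimated skew ~ {slope:.2e} (true {true_skew:.2e})")
```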
We reduce the cost of communication and synchronization in graph processing by analyzing the fastest way to process graphs: pushing the updates to a shared state or pulling the updates to a private state. We investigate the applicability of this push-pull dichotomy to various algorithms and its impact on complexity, performance, and the number of locks, atomics, and reads/writes used. We consider 11 graph algorithms, 3 programming models, 2 graph abstractions, and various families of graphs. The conducted analysis illustrates surprising differences between push and pull variants of different algorithms in performance, speed of convergence, and code complexity; the insights are backed up by performance data from hardware counters. We use these findings to illustrate which variant is faster for each algorithm and to develop generic strategies that enable even higher speedups. Our insights can be used to accelerate graph processing engines or libraries on both massively-parallel shared-memory machines as well as distributed-memory systems.
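
The push/pull distinction is easiest to see on a single iteration of a PageRank-style update. The sketch below is a minimal illustration, not the paper's benchmark code; the function names and the dictionary graph representation are made up. "Push" scatters a vertex's contribution to its out-neighbours (writes to shared state, which needs atomics in parallel code), while "pull" gathers contributions from in-neighbours (shared reads, private writes).

```python
def pagerank_push(out_edges, rank):
    new_rank = {v: 0.0 for v in rank}
    for u, nbrs in out_edges.items():
        share = rank[u] / max(len(nbrs), 1)
        for v in nbrs:                 # push: write into every neighbour's slot
            new_rank[v] += share
    return new_rank

def pagerank_pull(in_edges, out_degree, rank):
    new_rank = {}
    for v, srcs in in_edges.items():   # pull: read neighbours, write own slot
        new_rank[v] = sum(rank[u] / max(out_degree[u], 1) for u in srcs)
    return new_rank

if __name__ == "__main__":
    out_edges = {0: [1, 2], 1: [2], 2: [0]}
    in_edges = {0: [2], 1: [0], 2: [0, 1]}
    out_degree = {u: len(vs) for u, vs in out_edges.items()}
    rank = {v: 1 / 3 for v in out_edges}
    print(pagerank_push(out_edges, rank))
    print(pagerank_pull(in_edges, out_degree, rank))   # same result, different access pattern
```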
Volker Turau, 2020
In this work we extend the recently proposed synchronous broadcast algorithm amnesiac flooding to the case of intermittent communication channels. In amnesiac flooding a node forwards a received message in the subsequent round. There are several reasons that can make immediate forwarding of a message impossible: higher-priority traffic, overloaded channels, etc. We show that postponing the forwarding for one or more rounds prevents termination. Our extension overcomes this shortcoming while retaining the advantages of the algorithm: nodes don't need to memorize the reception of a message to guarantee termination, and messages are sent at most twice per edge. This extension makes it possible to solve more general broadcast tasks such as multi-source broadcasts and concurrent broadcasts for systems with bounded channel capacities.
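
For context, here is a toy round-based simulation of plain synchronous amnesiac flooding, the baseline algorithm rather than the paper's extension for intermittent channels; the function name and graph representation are illustrative. A node that receives the message in round r forwards it in round r+1 to all neighbours it did not receive it from, and keeps no other state about past receptions.

```python
from collections import defaultdict

def amnesiac_flooding(adjacency, source):
    received_from = {source: set()}      # nodes that must forward this round
    round_no, sends = 0, 0
    while received_from:
        next_received = defaultdict(set)
        for node, senders in received_from.items():
            for nbr in adjacency[node] - senders:   # forward away from senders
                next_received[nbr].add(node)
                sends += 1
        received_from = dict(next_received)
        round_no += 1
    return round_no, sends

if __name__ == "__main__":
    # 4-cycle: flooding terminates after a few rounds, at most two sends per edge.
    cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
    print(amnesiac_flooding(cycle, source=0))
```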
Understanding the bottlenecks in implementing stochastic gradient descent (SGD)-based distributed support vector machines (SVM) algorithms is important in training larger data sets. The communication time to do the model synchronization across the parallel processes is the main bottleneck that causes inefficiency in the training process. The model synchronization is directly affected by the mini-batch size of data processed before the global synchronization. In producing an efficient distributed model, the communication time spent on model synchronization has to be as low as possible while retaining a high testing accuracy. The effect of the model synchronization frequency on the convergence of the algorithm and the accuracy of the generated model must be well understood to design an efficient distributed model. In this research, we identify the bottlenecks in model synchronization in the parallel stochastic gradient descent (PSGD)-based SVM algorithm with respect to the training model synchronization frequency (MSF). Our research shows that by optimizing the MSF on the data sets that we used, a reduction of 98% in communication time can be gained (a 16x-24x speedup) with respect to high-frequency model synchronization. The training model optimization discussed in this paper guarantees a higher accuracy than the sequential algorithm along with faster convergence.
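
The trade-off can be mimicked in a few lines. The sketch below is a toy, single-process simulation; `parallel_sgd_svm`, the data, and all hyper-parameters are invented for illustration. Several simulated workers run hinge-loss SGD locally and average ("synchronize") their models only every `msf` local steps, so a larger MSF means fewer synchronization rounds and less communication, at the possible cost of slower convergence.

```python
import numpy as np

def parallel_sgd_svm(X, y, workers=4, msf=8, epochs=5, lr=0.01, lam=0.01):
    rng = np.random.default_rng(0)
    shards = np.array_split(rng.permutation(len(y)), workers)   # one shard per worker
    w = [np.zeros(X.shape[1]) for _ in range(workers)]          # local models
    step = 0
    for _ in range(epochs):
        for i in range(max(len(s) for s in shards)):
            for k, shard in enumerate(shards):
                xi, yi = X[shard[i % len(shard)]], y[shard[i % len(shard)]]
                margin = yi * w[k] @ xi
                grad = lam * w[k] - (yi * xi if margin < 1 else 0.0)
                w[k] -= lr * grad
            step += 1
            if step % msf == 0:                  # model synchronization point
                avg = sum(w) / workers
                w = [avg.copy() for _ in range(workers)]
    return sum(w) / workers

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(400, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)   # linearly separable toy data
    model = parallel_sgd_svm(X, y, msf=16)
    print(f"training accuracy ~ {np.mean(np.sign(X @ model) == y):.2f}")
```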