
Noidy Conmunixatipn: On the Convergence of the Averaging Population Protocol

Published by Frederik Mallmann-Trenn
Publication date: 2019
Research field: Informatics engineering
Paper language: English





We study a process of \emph{averaging} in a distributed system with \emph{noisy communication}. Each of the agents in the system starts with some value, and the goal of each agent is to compute the average of all the initial values. In each round, one pair of agents is drawn uniformly at random from the whole population; the two agents communicate with each other, and each of them updates its local value based on its own value and the received message. The communication is noisy: whenever an agent sends a value $v$, the receiving agent receives $v+N$, where $N$ is a zero-mean Gaussian random variable. The two quality measures of interest are (i) the total sum of squares $TSS(t)$, which measures the sum of square distances from the average load to the \emph{initial average}, and (ii) $\bar{\phi}(t)$, which measures the sum of square distances from the average load to the \emph{running average} (the average at time $t$). It is known that the simple averaging protocol---in which an agent sends its current value and sets its new value to the average of the received value and its current value---eventually converges to a state where $\bar{\phi}(t)$ is small. It has been observed that $TSS(t)$, due to the noise, eventually diverges, and previous research---mostly in control theory---has focused on showing eventual convergence with respect to the running average. We obtain the first probabilistic bounds on the convergence time of $\bar{\phi}(t)$ and precise bounds on the drift of $TSS(t)$, which show that although $TSS(t)$ eventually diverges, for a wide and interesting range of parameters it stays small for a number of rounds that is polynomial in the number of agents. Our results extend to the synchronous setting and to settings where the agents are restricted to discrete values and perform rounding.
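For illustration, here is a minimal Python simulation sketch of the process described above: in each round a uniformly random pair of agents is drawn, each agent receives the other's value corrupted by zero-mean Gaussian noise and averages it with its own, and both $TSS(t)$ and $\bar{\phi}(t)$ are recorded. The parameters (number of agents, noise level, number of rounds) are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def simulate(n=1000, sigma=0.1, rounds=100_000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, size=n)        # initial values of the n agents
    initial_avg = x.mean()

    tss, phi = [], []
    for t in range(1, rounds + 1):
        i, j = rng.choice(n, size=2, replace=False)   # pair drawn uniformly at random
        # Each agent receives the other's value plus zero-mean Gaussian noise
        # and sets its new value to the average of its own value and the message.
        recv_i = x[j] + rng.normal(0.0, sigma)
        recv_j = x[i] + rng.normal(0.0, sigma)
        x[i], x[j] = (x[i] + recv_i) / 2.0, (x[j] + recv_j) / 2.0

        if t % n == 0:                                # record once per n interactions
            tss.append(np.sum((x - initial_avg) ** 2))   # TSS(t): deviation from initial average
            phi.append(np.sum((x - x.mean()) ** 2))      # phi_bar(t): deviation from running average

    return np.array(tss), np.array(phi)

if __name__ == "__main__":
    tss, phi = simulate()
    print(f"TSS at the end: {tss[-1]:.3f}")
    print(f"phi_bar at the end: {phi[-1]:.3f}")
```

Running the sketch typically shows $\bar{\phi}(t)$ settling at a small value while $TSS(t)$ slowly drifts upward, which is the qualitative behavior the abstract describes.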




Read also

Improving transaction throughput is an important challenge for Bitcoin. However, shortening the block generation interval or increasing the block size to improve throughput makes sharing blocks within the network slower and increases the number of orphan blocks. Consequently, the security of the blockchain is sacrificed. To mitigate this, it is necessary to reduce the block propagation delay. Thanks to new Bitcoin protocols and improvements of the Internet, the block propagation delay in the Bitcoin network has been shortened in recent years. In this study, we identify the impacts of compact block relay---an up-to-date Bitcoin protocol---and of Internet improvements on the block propagation delay and fork rate in the Bitcoin network from 2015 to 2019. Existing measurement studies could not identify these impacts, but our simulation enables it. The experimental results reveal that compact block relay contributes to shortening the block propagation delay more than Internet improvements do. The block propagation delay is reduced by 64.5% for the 50th percentile and 63.7% for the 90th percentile due to Internet improvements, and by 90.1% for the 50th percentile and 87.6% for the 90th percentile due to compact block relay.
We study first-order optimization algorithms for computing the barycenter of Gaussian distributions with respect to the optimal transport metric. Although the objective is geodesically non-convex, Riemannian GD empirically converges rapidly, in fact faster than off-the-shelf methods such as Euclidean GD and SDP solvers. This stands in stark contrast to the best-known theoretical results for Riemannian GD, which depend exponentially on the dimension. In this work, we prove new geodesic convexity results which provide stronger control of the iterates, yielding a dimension-free convergence rate. Our techniques also enable the analysis of two related notions of averaging, the entropically-regularized barycenter and the geometric median, providing the first convergence guarantees for Riemannian GD for these problems.
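As an illustration of the iteration this abstract refers to, the following is a hedged sketch of Riemannian gradient descent on the Bures-Wasserstein geometry for the barycenter of centered Gaussians: each step averages the optimal transport maps from the current iterate to the inputs and moves along the corresponding geodesic. This is the standard formulation of the update, not code from the paper, and the step size, iteration count, and data are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import sqrtm

def ot_map(cov_from, cov_to):
    # Optimal transport map (a symmetric matrix) from N(0, cov_from) to N(0, cov_to).
    root = np.real(sqrtm(cov_from))
    inv_root = np.linalg.inv(root)
    return inv_root @ np.real(sqrtm(root @ cov_to @ root)) @ inv_root

def bures_wasserstein_barycenter(covs, eta=1.0, iters=100):
    # Riemannian GD: step of length eta along the geodesic defined by the
    # average of the transport maps from the current iterate to each input.
    d = covs[0].shape[0]
    cov = np.mean(covs, axis=0)                 # any positive-definite initialization works
    for _ in range(iters):
        t_bar = np.mean([ot_map(cov, c) for c in covs], axis=0)
        step = np.eye(d) - eta * (np.eye(d) - t_bar)
        cov = step @ cov @ step
    return cov

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    covs = []
    for _ in range(5):
        m = rng.standard_normal((3, 3))
        covs.append(m @ m.T + np.eye(3))        # random positive-definite covariances
    print(bures_wasserstein_barycenter(covs))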
We study the stochastic bilinear minimax optimization problem, presenting an analysis of the Stochastic ExtraGradient (SEG) method with constant step size, along with variations of the method that yield favorable convergence. We first note that the last iterate of the basic SEG method only contracts to a fixed neighborhood of the Nash equilibrium, independent of the step size. This contrasts sharply with the standard setting of minimization, where standard stochastic algorithms converge to a neighborhood that vanishes in proportion to the square root of the (constant) step size. Under the same setting, however, we prove that when augmented with iteration averaging, SEG provably converges to the Nash equilibrium, and this rate is provably accelerated by incorporating a scheduled restarting procedure. In the interpolation setting, we achieve an optimal convergence rate up to tight constants. We present numerical experiments that validate our theoretical findings and demonstrate the effectiveness of the SEG method when equipped with iteration averaging and restarting.
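The following is a minimal sketch of SEG with a constant step size and iteration averaging on a toy bilinear game min_x max_y x^T A y, whose Nash equilibrium is the origin. The problem instance, noise model, and step size are illustrative assumptions rather than the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d, eta, steps, noise = 5, 0.05, 20_000, 0.1
A = rng.standard_normal((d, d))                   # bilinear game min_x max_y x^T A y
x, y = rng.standard_normal(d), rng.standard_normal(d)
x_avg, y_avg = np.zeros(d), np.zeros(d)

for t in range(1, steps + 1):
    # Stochastic gradient oracle: exact bilinear gradients plus additive Gaussian noise.
    gx = A @ y + noise * rng.standard_normal(d)
    gy = A.T @ x + noise * rng.standard_normal(d)
    x_half, y_half = x - eta * gx, y + eta * gy   # extrapolation ("look-ahead") step
    gxh = A @ y_half + noise * rng.standard_normal(d)
    gyh = A.T @ x_half + noise * rng.standard_normal(d)
    x, y = x - eta * gxh, y + eta * gyh           # update step taken from the original point
    x_avg += (x - x_avg) / t                      # online iteration averaging
    y_avg += (y - y_avg) / t

print("distance of last iterate to equilibrium:    ", np.linalg.norm(np.r_[x, y]))
print("distance of averaged iterate to equilibrium:", np.linalg.norm(np.r_[x_avg, y_avg]))
```

In line with the abstract, the last iterate stalls in a noise-dominated neighborhood of the equilibrium, while the averaged iterate gets noticeably closer.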
In population protocols, the underlying distributed network consists of $n$ nodes (or agents), denoted by $V$, and a scheduler that continuously selects uniformly random pairs of nodes to interact. When two nodes interact, their states are updated by applying a state transition function that depends only on the states of the two nodes prior to the interaction. The efficiency of a population protocol is measured in terms of both time (which is the number of interactions until the nodes collectively have a valid output) and the number of possible states of nodes used by the protocol. By convention, we consider the parallel time cost, which is the time divided by $n$. In this paper we consider the majority problem, where each node receives as input a color that is either black or white, and the goal is to have all of the nodes output the color that is the majority of the input colors. We design a population protocol that solves the majority problem in $O(\log^{3/2} n)$ parallel time, both with high probability and in expectation, while using $O(\log n)$ states. Our protocol improves on a recent protocol of Berenbrink et al. that runs in $O(\log^{5/3} n)$ parallel time, both with high probability and in expectation, using $O(\log n)$ states.
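To make the model concrete, the sketch below implements the population-protocol scheduler described in this abstract (a uniformly random pair interacts and a state transition function is applied) and runs it with the classic four-state exact-majority protocol. This simple protocol is only an illustration of the model; it is not the $O(\log n)$-state protocol of the paper.

```python
import random

# Transition function of the classic 4-state exact-majority protocol:
# 'A'/'B' are strong opinions, 'a'/'b' are weak ones. Two opposite strong
# opinions cancel into weak ones; a surviving strong opinion flips opposing
# weak ones. Pairs not listed here are left unchanged.
RULES = {
    ("A", "B"): ("a", "b"), ("B", "A"): ("b", "a"),
    ("A", "b"): ("A", "a"), ("b", "A"): ("a", "A"),
    ("B", "a"): ("B", "b"), ("a", "B"): ("b", "B"),
}

def run_majority(n_black=600, n_white=400, seed=0):
    rng = random.Random(seed)
    states = ["A"] * n_black + ["B"] * n_white
    n = len(states)
    interactions = 0
    # Stop once every agent outputs the same opinion ('A'/'a' vs 'B'/'b').
    while len({s.lower() for s in states}) > 1:
        i, j = rng.sample(range(n), 2)            # scheduler: uniformly random pair
        states[i], states[j] = RULES.get((states[i], states[j]), (states[i], states[j]))
        interactions += 1
    return states[0].upper(), interactions / n    # majority opinion and parallel time

if __name__ == "__main__":
    opinion, parallel_time = run_majority()
    print(f"majority opinion: {opinion}, parallel time: {parallel_time:.1f}")
```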
Zohir Bouzid, 2009
We study the convergence problem in fully asynchronous, uni-dimensional robot networks that are prone to Byzantine (i.e., malicious) failures. In these settings, oblivious anonymous robots with arbitrary initial positions are required to eventually converge to an a priori unknown position despite a subset of them exhibiting Byzantine behavior. Our contribution is twofold. We propose a deterministic algorithm that solves the problem in the most generic settings: fully asynchronous robots that operate in the non-atomic CORDA model. Our algorithm provides convergence in networks of size 5f+1, where f is the upper bound on the number of Byzantine robots. Additionally, we prove that 5f+1 is a lower bound whenever robot scheduling is fully asynchronous. This contrasts with previous results in partially synchronous robot networks, where 3f+1 robots are necessary and sufficient.