
Consensus over evolutionary graphs

Published by: Michalis Smyrnakis
Publication date: 2018
Language: English





We establish average consensus on graphs with dynamic topologies prescribed by evolutionary games among strategic agents. Each agent possesses a private reward function and dynamically decides whether to create new links and/or delete existing ones in a selfish and decentralized fashion, as indicated by a certain randomized mechanism. This model induces a time-varying and state-dependent graph topology for which traditional consensus analysis is not applicable. We prove asymptotic average consensus almost surely and in mean square for any initial condition and graph topology. In addition, we establish exponential convergence in expectation. Our results are validated via simulation studies on random networks.
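For intuition on the consensus mechanics, the sketch below simulates average consensus over a graph that is redrawn at every step. It is only an illustration under assumed parameters: the topology here is resampled as an Erdős-Rényi graph, whereas in the paper links are created and deleted by strategic agents through the randomized game mechanism; the functions `metropolis_weights` and `simulate` are hypothetical helpers, not the authors' construction. The point it demonstrates is that a doubly stochastic update preserves the initial average at every step even though the graph keeps changing.

```python
import numpy as np

def metropolis_weights(adj):
    """Doubly stochastic Metropolis weights for an undirected adjacency matrix."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                W[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()   # diagonal absorbs the remaining mass
    return W

def simulate(n=20, steps=200, p=0.3, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n)          # private initial values
    target = x.mean()               # the average that every update preserves
    for _ in range(steps):
        # illustrative stand-in for the game-driven topology: a random graph each step
        adj = rng.random((n, n)) < p
        adj = np.triu(adj, 1)
        adj = adj | adj.T
        x = metropolis_weights(adj) @ x
    return np.abs(x - target).max()

print("max deviation from the initial average:", simulate())
```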




Read also

In this paper we study the distributed average consensus problem in multi-agent systems with directed communication links that are subject to quantized information flow. Specifically, we present and analyze a distributed averaging algorithm which operates exclusively with quantized values (i.e., the information stored, processed and exchanged between neighboring agents is subject to deterministic uniform quantization) and relies on event-driven updates (e.g., to reduce energy consumption, communication bandwidth, network congestion, and/or processor usage). The main idea of the proposed algorithm is that each node (i) models its initial state as two quantized fractions which have numerators equal to the node's initial state and denominators equal to one, and (ii) transmits one fraction randomly while it keeps the other stored. Then, every time it receives one or more fractions, it averages their numerators with the numerator of the fraction it stored, and then transmits them to randomly selected out-neighbors. We characterize the properties of the proposed distributed algorithm and show that its execution, on any static and strongly connected digraph, allows each agent to reach in finite time a fixed state that is equal (within one quantization level) to the average of the initial states. We extend the operation of the algorithm to achieve finite-time convergence in the presence of a dynamic directed communication topology subject to some connectivity conditions. Finally, we provide examples to illustrate the operation, performance, and potential advantages of the proposed algorithm. We compare against state-of-the-art quantized average consensus algorithms and show that our algorithm's convergence speed significantly outperforms most existing protocols.
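The fraction-circulation idea lends itself to a short simulation. The sketch below is a round-based simplification under assumed parameters, not the event-triggered algorithm of the paper: `quantized_average_sketch`, the ring digraph, and the round count are illustrative choices. It keeps the defining invariant that every floor/ceil averaging step conserves the total numerator mass, so the stored integer values drift toward the quantized average of the initial states.

```python
import random
from collections import defaultdict

def quantized_average_sketch(initial, out_neighbors, rounds=2000, seed=1):
    """Toy version: integer numerators circulate over a digraph; each averaging
    step is a mass-preserving floor/ceil split, so the total sum never changes."""
    rng = random.Random(seed)
    n = len(initial)
    stored = list(initial)                            # one stored numerator per node
    in_flight = {i: [initial[i]] for i in range(n)}   # the second fraction starts in flight
    for _ in range(rounds):
        inbox = defaultdict(list)
        for i in range(n):                            # deliver circulating fractions
            for num in in_flight[i]:
                inbox[rng.choice(out_neighbors[i])].append(num)
        in_flight = {i: [] for i in range(n)}
        for i in range(n):                            # average with the stored numerator
            if not inbox[i]:
                continue
            vals = inbox[i] + [stored[i]]
            total, k = sum(vals), len(vals)
            base, rem = divmod(total, k)
            split = [base + 1] * rem + [base] * (k - rem)
            stored[i] = split[-1]                     # keep one value, forward the rest
            in_flight[i] = split[:-1]
    return stored

# 8-node directed ring; the true average of the initial states is 5.5
outs = {i: [(i + 1) % 8] for i in range(8)}
print(quantized_average_sketch([3, 9, 1, 7, 5, 11, 2, 6], outs))
```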
In decentralized optimization, multiple nodes in a network collaborate to minimize the sum of their local loss functions. The information exchange between nodes required for this task is often limited by network connectivity. We consider a setting in which communication between nodes is hindered by both (i) a finite rate-constraint on the signal transmitted by any node, and (ii) additive noise corrupting the signal received by any node. We propose a novel algorithm for this scenario: Decentralized Lazy Mirror Descent with Differential Exchanges (DLMD-DiffEx), which guarantees convergence of the local estimates to the optimal solution under the given communication constraints. A salient feature of DLMD-DiffEx is the introduction of additional proxy variables that are maintained by the nodes to account for the disagreement in their estimates due to channel noise and rate constraints. Convergence to the optimal solution is attained by having nodes iteratively exchange these disagreement terms until consensus is achieved. In order to prevent noise accumulation during this exchange, DLMD-DiffEx relies on two sequences: one controlling the power of the transmitted signal, and the other determining the consensus rate. We provide clear insights on the design of these two sequences, which highlight the interplay between consensus rate and noise amplification. We investigate the performance of DLMD-DiffEx both from a theoretical perspective and through numerical evaluations.
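The interplay between the consensus-rate sequence and noise amplification can be pictured with a much simpler recursion. The sketch below is not DLMD-DiffEx (there are no proxy variables, mirror steps, or rate constraints); `noisy_consensus`, the weight matrix `W`, and the step schedule are assumptions made purely for illustration. With a diminishing consensus weight `gamma`, the noise injected at every exchange has summable variance while the disagreement between nodes still shrinks.

```python
import numpy as np

def noisy_consensus(x0, W, steps=5000, noise_std=0.1, seed=0):
    """Generic illustration: averaging with a decaying consensus weight so that
    noise added at every exchange does not accumulate without bound."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    n = len(x)
    for k in range(steps):
        gamma = 1.0 / (k + 2)                                # decaying consensus-rate sequence
        noisy_rx = W @ x + noise_std * rng.normal(size=n)    # neighbors' signals arrive corrupted
        x = (1 - gamma) * x + gamma * noisy_rx
    return x

# assumed 4-node cycle with lazy averaging weights
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
print(noisy_consensus([1.0, 5.0, -2.0, 8.0], W))   # ends close to the average 3.0
```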
Zhuoqing Song, Lei Shi, Shi Pu (2021)
In this work, we consider the decentralized optimization problem in which a network of $n$ agents, each possessing a smooth and convex objective function, wishes to collaboratively minimize the average of all the objective functions through peer-to-peer communication in a directed graph. To solve the problem, we propose two accelerated Push-DIGing methods termed APD and APD-SC for minimizing non-strongly convex objective functions and strongly convex ones, respectively. We show that APD and APD-SC respectively converge at the rates $O\left(\frac{1}{k^2}\right)$ and $O\left(\left(1 - C\sqrt{\frac{\mu}{L}}\right)^k\right)$ up to constant factors depending only on the mixing matrix. To the best of our knowledge, APD and APD-SC are the first decentralized methods to achieve provable acceleration over unbalanced directed graphs. Numerical experiments demonstrate the effectiveness of both methods.
Huan Li, Zhouchen Lin (2021)
Decentralized optimization over time-varying graphs has become increasingly common in modern machine learning with massive data stored on millions of mobile devices, such as in federated learning. This paper revisits the widely used accelerated gradient tracking and extends it to time-varying graphs. We prove the $O\left(\left(\frac{\gamma}{1-\sigma_{\gamma}}\right)^2\sqrt{\frac{L}{\epsilon}}\right)$ and $O\left(\left(\frac{\gamma}{1-\sigma_{\gamma}}\right)^{1.5}\sqrt{\frac{L}{\mu}}\log\frac{1}{\epsilon}\right)$ complexities for the practical single-loop accelerated gradient tracking over time-varying graphs when the problems are nonstrongly convex and strongly convex, respectively, where $\gamma$ and $\sigma_{\gamma}$ are two common constants characterizing the network connectivity, $\epsilon$ is the desired precision, and $L$ and $\mu$ are the smoothness and strong convexity constants, respectively. Our complexities improve significantly over the ones of $O\left(\frac{1}{\epsilon^{5/7}}\right)$ and $O\left(\left(\frac{L}{\mu}\right)^{5/7}\frac{1}{(1-\sigma)^{1.5}}\log\frac{1}{\epsilon}\right)$, respectively, which were proved in the original literature only for static graphs, where $\frac{1}{1-\sigma}$ equals $\frac{\gamma}{1-\sigma_{\gamma}}$ when the network is time-invariant. When combined with a multiple consensus subroutine, the dependence on the network connectivity constants can be further improved to $O(1)$ and $O\left(\frac{\gamma}{1-\sigma_{\gamma}}\right)$ for the computation and communication complexities, respectively. When the network is static, by employing the Chebyshev acceleration, our complexities exactly match the lower bounds without hiding any poly-logarithmic factor for both nonstrongly convex and strongly convex problems.
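For readers who want a concrete baseline for the method being accelerated, here is a minimal sketch of plain (non-accelerated) gradient tracking over a time-varying graph, applied to scalar quadratics f_i(x) = 0.5 (x - a_i)^2 whose average objective is minimized at mean(a). The mixing matrices, the step size, and the helper `random_doubly_stochastic` are illustrative assumptions; the paper's accelerated scheme adds momentum on top of this tracking recursion and carries the complexity guarantees quoted above.

```python
import numpy as np

def random_doubly_stochastic(n, rng, rounds=10):
    """Crude doubly stochastic mixing matrix from random pairwise averaging steps."""
    W = np.eye(n)
    for _ in range(rounds):
        i, j = rng.choice(n, size=2, replace=False)
        P = np.eye(n)
        P[i, i] = P[j, j] = P[i, j] = P[j, i] = 0.5
        W = P @ W
    return W

def gradient_tracking(a, steps=500, eta=0.1, seed=0):
    """x tracks the decision variables, y tracks the average gradient."""
    rng = np.random.default_rng(seed)
    a = np.array(a, dtype=float)
    n = len(a)
    x = np.zeros(n)
    grad = x - a                  # gradient of 0.5*(x_i - a_i)^2
    y = grad.copy()               # gradient tracker, initialized at the local gradients
    for _ in range(steps):
        W = random_doubly_stochastic(n, rng)   # topology changes every iteration
        x_new = W @ x - eta * y
        grad_new = x_new - a
        y = W @ y + grad_new - grad            # tracking update preserves sum(y) = sum(grad)
        x, grad = x_new, grad_new
    return x

print(gradient_tracking([1.0, 4.0, -3.0, 10.0]))   # each entry approaches the mean 3.0
```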
This paper studies linear stochastic approximation (SA) algorithms and their application to multi-agent systems in engineering and sociology. As our main contribution, we provide necessary and sufficient conditions for convergence of linear SA algorithms to a deterministic or random final vector. We also characterize the system's convergence rate, when the system is convergent. Moreover, differing from the non-negative gain functions in traditional SA algorithms, this paper also considers the case where the gain functions are allowed to take arbitrary real values. Using our general treatment, we provide necessary and sufficient conditions to reach consensus and group consensus for first-order discrete-time multi-agent systems over random signed networks and with state-dependent noise. Finally, we extend our results to the setting of multi-dimensional linear SA algorithms and characterize the behavior of the multi-dimensional Friedkin-Johnsen model over random interaction networks.
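A minimal instance of such a linear SA recursion is the noisy consensus iteration x_{k+1} = x_k + a_k (-L_k x_k + w_k), where a_k is a diminishing gain, L_k is the Laplacian of a randomly drawn graph, and w_k is additive noise. The sketch below covers only this narrow special case under assumed parameters (non-negative gains, an unsigned network, state-independent noise); the paper's conditions address the far more general setting described above.

```python
import numpy as np

def linear_sa_consensus(x0, steps=5000, edge_prob=0.4, seed=0):
    """Linear SA recursion x <- x + a_k * (-L_k x + noise) with diminishing gains;
    iterates reach approximate consensus near the initial average."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    n = len(x)
    for k in range(steps):
        a_k = 1.0 / (k + 10)                          # diminishing, square-summable gains
        adj = rng.random((n, n)) < edge_prob          # random undirected interaction graph
        adj = np.triu(adj, 1)
        adj = (adj | adj.T).astype(float)
        L = np.diag(adj.sum(axis=1)) - adj            # graph Laplacian
        noise = 0.05 * rng.normal(size=n)
        x = x + a_k * (-L @ x + noise)
    return x

print(linear_sa_consensus([2.0, -1.0, 7.0, 4.0, 0.0]))   # near the initial average 2.4
```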