
Decentralized optimization over noisy, rate-constrained networks: Achieving consensus by communicating differences

Published by Rajarshi Saha
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





In decentralized optimization, multiple nodes in a network collaborate to minimize the sum of their local loss functions. The information exchange between nodes required for this task is often limited by network connectivity. We consider a setting in which communication between nodes is hindered by both (i) a finite rate constraint on the signal transmitted by any node, and (ii) additive noise corrupting the signal received by any node. We propose a novel algorithm for this scenario: Decentralized Lazy Mirror Descent with Differential Exchanges (DLMD-DiffEx), which guarantees convergence of the local estimates to the optimal solution under the given communication constraints. A salient feature of DLMD-DiffEx is the introduction of additional proxy variables, maintained by the nodes to account for the disagreement in their estimates caused by channel noise and rate constraints. Convergence to the optimal solution is attained by having nodes iteratively exchange these disagreement terms until consensus is achieved. To prevent noise accumulation during this exchange, DLMD-DiffEx relies on two sequences: one controlling the power of the transmitted signal, and the other determining the consensus rate. We provide clear insights on the design of these two sequences, highlighting the interplay between consensus rate and noise amplification. We investigate the performance of DLMD-DiffEx both theoretically and through numerical evaluations.
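To make that interplay concrete, here is a minimal numerical sketch of the differential-exchange idea on scalar quadratic losses over a ring network. The quadratic losses, the ring topology, and the power and consensus schedules p_t and gamma_t are our illustrative assumptions; this is not the paper's exact DLMD-DiffEx recursion.

```python
import numpy as np

# Sketch: noisy differential exchange on f_i(x) = 0.5 * (x - a_i)^2,
# whose network-wide optimum is a.mean(). All schedules are assumptions.
rng = np.random.default_rng(0)
n, T, sigma = 8, 5000, 0.1
a = rng.normal(size=n)
x = np.zeros(n)                      # local estimates
W = np.zeros((n, n))                 # doubly stochastic mixing weights (ring)
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

for t in range(1, T + 1):
    p_t = t ** 0.75                  # transmit-power sequence (assumed schedule)
    gamma_t = t ** -0.5              # consensus-rate sequence (assumed schedule)
    # Each node broadcasts only the *difference* between its estimate and the
    # proxy its neighbors already hold, scaled by p_t; after rescaling at the
    # receiver, the effective channel noise shrinks like sigma / p_t.
    y = x + (sigma / p_t) * rng.normal(size=n)   # neighbors' refreshed proxies
    grad = x - a                     # local gradient of the quadratic loss
    x = (1 - gamma_t) * x + gamma_t * (W @ y) - (1.0 / t) * grad

print(np.max(np.abs(x - a.mean())))  # distance of all nodes to the optimum
```

Growing p_t tames noise accumulation while the decaying gamma_t keeps the consensus averaging from amplifying what noise remains, mirroring the trade-off described above.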


Read also

The present work introduces the hybrid consensus alternating direction method of multipliers (H-CADMM), a novel framework for optimization over networks which unifies existing distributed optimization approaches, including the centralized and the decentralized consensus ADMM. H-CADMM provides a flexible tool that leverages the underlying graph topology to achieve a desirable sweet spot between node-to-node communication overhead and rate of convergence, thereby alleviating known limitations of both C-CADMM and D-CADMM. A rigorous analysis of the novel method establishes a linear convergence rate and guides the choice of parameters that optimize this rate. The novel hybrid update rules of H-CADMM lend themselves to in-network acceleration, which is shown to deliver a considerable, essentially free-of-charge performance boost over the fully decentralized ADMM. Comprehensive numerical tests validate the analysis and showcase the potential of the method in efficiently tackling widely useful learning tasks.
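For orientation, the sketch below implements the classical decentralized consensus ADMM (the D-CADMM that H-CADMM generalizes), specialized to scalar quadratic losses on a ring. The losses, the graph, and rho = 1 are our illustrative choices, not the paper's hybrid update rules.

```python
import numpy as np

# D-CADMM on f_i(x) = 0.5 * (x - a_i)^2 over a ring; consensus constraints
# x_i = x_j on every edge are handled via per-node dual variables lam.
rng = np.random.default_rng(1)
n, rho, K = 8, 1.0, 200
a = rng.normal(size=n)
nbrs = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
d = np.array([len(nb) for nb in nbrs], dtype=float)   # node degrees
x, lam = np.zeros(n), np.zeros(n)

for _ in range(K):
    # primal step: closed form for the quadratic local subproblem
    s = np.array([sum(x[i] + x[j] for j in nbrs[i]) for i in range(n)])
    x_new = (a - lam + rho * s) / (1.0 + 2.0 * rho * d)
    # dual step: penalize remaining disagreement with each neighbor
    lam += rho * np.array([sum(x_new[i] - x_new[j] for j in nbrs[i])
                           for i in range(n)])
    x = x_new

print(np.max(np.abs(x - a.mean())))   # -> ~0: consensus at the average
```

H-CADMM's contribution is to interpolate between this fully decentralized recursion and the centralized one, trading per-iteration communication for convergence speed.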
This technical note proposes decentralized partial-consensus optimization with inequality constraints, and a continuous-time algorithm based on multiple interconnected recurrent neural networks (RNNs) is derived to solve the resulting optimization problems. First, a partial-consensus matrix originating from the Laplacian matrix is constructed to handle the partial-consensus constraints. In addition, using non-smooth analysis and a Lyapunov-based technique, the convergence of the designed algorithm is guaranteed. Finally, the effectiveness of the obtained results is demonstrated through several examples.
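The note's algorithm is a continuous-time RNN; as a generic stand-in (without the paper's inequality and partial-consensus constraints), the Euler-integrated proportional-integral consensus flow below solves the plain consensus problem on quadratics and conveys the flavor of such dynamics.

```python
import numpy as np

# PI consensus flow: x_dot = -grad f(x) - L x - L v,  v_dot = L x.
# The quadratic losses and ring Laplacian L are illustrative assumptions.
rng = np.random.default_rng(2)
n, dt, steps = 8, 0.01, 5000
a = rng.normal(size=n)                # f_i(x) = 0.5 * (x - a_i)^2
L = 2.0 * np.eye(n)                   # ring-graph Laplacian
for i in range(n):
    L[i, (i - 1) % n] -= 1.0
    L[i, (i + 1) % n] -= 1.0

x, v = np.zeros(n), np.zeros(n)
for _ in range(steps):
    x_dot = -(x - a) - L @ x - L @ v  # gradient flow + proportional consensus
    v_dot = L @ x                     # integral state absorbs gradient disagreement
    x, v = x + dt * x_dot, v + dt * v_dot

print(np.max(np.abs(x - a.mean())))   # -> ~0 at the network-wide optimum
```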
In this report, we study decentralized stochastic optimization to minimize a sum of smooth and strongly convex cost functions when the functions are distributed over a directed network of nodes. In contrast to the existing work, we use gradient tracking to improve certain aspects of the resulting algorithm. In particular, we propose the \textbf{\texttt{S-ADDOPT}} algorithm, which assumes a stochastic first-order oracle at each node, and show that for a constant step-size $\alpha$, each node converges linearly inside an error ball around the optimal solution, the size of which is controlled by $\alpha$. For decaying step-sizes $\mathcal{O}(1/k)$, we show that \textbf{\texttt{S-ADDOPT}} reaches the exact solution sublinearly at $\mathcal{O}(1/k)$ and its convergence is asymptotically network-independent. Thus the asymptotic behavior of \textbf{\texttt{S-ADDOPT}} is comparable to centralized stochastic gradient descent. Numerical experiments over both strongly convex and non-convex problems illustrate the convergence behavior and the performance comparison of the proposed algorithm.
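S-ADDOPT itself runs over directed graphs with push-sum style corrections; as a simplified undirected stand-in, the sketch below is plain gradient tracking (DIGing) with a stochastic first-order oracle and an O(1/k) step-size. The quadratic losses and ring weights are our assumptions.

```python
import numpy as np

# Gradient tracking (DIGing) with a stochastic oracle on a ring graph.
rng = np.random.default_rng(3)
n, K, sigma = 8, 3000, 0.1
a = rng.normal(size=n)                  # f_i(x) = 0.5 * (x - a_i)^2
W = np.zeros((n, n))                    # doubly stochastic mixing weights
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

def stoch_grad(x):                      # stochastic first-order oracle
    return (x - a) + sigma * rng.normal(size=n)

x = np.zeros(n)
g = stoch_grad(x)
y = g.copy()                            # tracker initialized at g(x_0)
for k in range(1, K + 1):
    alpha = 1.0 / k                     # decaying step-size, O(1/k)
    x_new = W @ x - alpha * y
    g_new = stoch_grad(x_new)
    y = W @ y + g_new - g               # track the network-average gradient
    x, g = x_new, g_new

print(np.max(np.abs(x - a.mean())))
```

With a constant alpha the same loop stalls in a noise-dependent error ball, which mirrors the two regimes the abstract distinguishes.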
In this paper we propose a novel consensus protocol for discrete-time multi-agent systems (MAS), which solves the dynamic consensus problem on the max value, i.e., the dynamic max-consensus problem. In this problem, each agent is fed an exogenous reference signal, and the objective of each agent is to estimate the instantaneous, time-varying value of the maximum among the signals fed to the network by exploiting only local and anonymous interactions among the agents. The absolute and relative tracking errors of the proposed distributed control protocol are theoretically characterized and shown to be bounded, and by tuning its parameters it is possible to trade off convergence time against steady-state error. The dynamic max-consensus algorithm is then applied to solve the distributed size estimation problem in a dynamic setting where the size of the network is time-varying during the execution of the estimation algorithm. Numerical simulations are provided to corroborate the theoretical analysis.
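A hedged sketch of the idea, assuming a simple decrement-based rule (our stand-in, not necessarily the paper's exact protocol): each agent keeps the maximum of its own signal and its neighborhood's estimates, decremented by a tuning parameter alpha so that stale maxima decay; larger alpha tracks a falling maximum faster at the price of a larger steady-state error.

```python
import numpy as np

# Dynamic max-consensus stand-in on a ring; signal() is a hypothetical
# exogenous reference fed to each agent.
n, K, alpha = 8, 400, 0.05
nbrs = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
est = np.zeros(n)                        # local estimates of the global max

def signal(i, k):                        # time-varying reference at agent i
    return np.sin(0.02 * k + i) + 0.1 * i

for k in range(K):
    new = np.empty(n)
    for i in range(n):
        # local max over the closed neighborhood, decayed by alpha,
        # floored at the agent's own current signal
        new[i] = max(signal(i, k),
                     max(est[j] for j in [i, *nbrs[i]]) - alpha)
    est = new

true_max = max(signal(i, K - 1) for i in range(n))
print(true_max, est.min(), est.max())    # estimates hover near the true max
```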
Zhuoqing Song, Lei Shi, Shi Pu (2021)
In this paper, we propose two communication-efficient algorithms for decentralized optimization over a multi-agent network with general directed network topology. In the first part, we consider a novel communication-efficient gradient tracking based method, termed Compressed Push-Pull (CPP), which combines the Push-Pull method with communication compression. We show that CPP is applicable to a general class of unbiased compression operators and achieves linear convergence for strongly convex and smooth objective functions. In the second part, we propose a broadcast-like version of CPP (B-CPP), which also achieves linear convergence rate under the same conditions for the objective functions. B-CPP can be applied in an asynchronous broadcast setting and further reduce communication costs compared to CPP. Numerical experiments complement the theoretical analysis and confirm the effectiveness of the proposed methods.
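CPP is stated to admit a general class of unbiased compression operators; one standard member of that class is stochastic uniform quantization, sketched below as our illustration (the operator only, not the paper's full CPP/B-CPP recursion).

```python
import numpy as np

rng = np.random.default_rng(4)

def unbiased_quantize(v, levels=4):
    """Stochastic uniform quantizer with E[Q(v)] = v: magnitudes are mapped
    to [0, levels] and rounded up or down at random with the right odds."""
    scale = np.max(np.abs(v)) + 1e-12
    u = np.abs(v) / scale * levels
    low = np.floor(u)
    q = low + (rng.random(v.shape) < (u - low))   # round up w.p. u - low
    return np.sign(v) * q * scale / levels

v = rng.normal(size=5)
avg = np.mean([unbiased_quantize(v) for _ in range(20000)], axis=0)
print(v)
print(avg)   # ~= v, confirming unbiasedness empirically
```

Each transmission then costs only the scale plus a few bits per entry, which is where the communication savings come from.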