In decentralized optimization, multiple nodes in a network collaborate to minimize the sum of their local loss functions. The information exchange between nodes required for this task is often limited by network connectivity. We consider a setting in which communication between nodes is hindered by both (i) a finite rate constraint on the signal transmitted by any node, and (ii) additive noise corrupting the signal received by any node. We propose a novel algorithm for this scenario: Decentralized Lazy Mirror Descent with Differential Exchanges (DLMD-DiffEx), which guarantees convergence of the local estimates to the optimal solution under the given communication constraints. A salient feature of DLMD-DiffEx is the introduction of additional proxy variables, maintained by the nodes to account for the disagreement in their estimates caused by channel noise and rate constraints. Convergence to the optimal solution is attained by having nodes iteratively exchange these disagreement terms until consensus is achieved. To prevent noise accumulation during this exchange, DLMD-DiffEx relies on two sequences: one controlling the power of the transmitted signal, and the other determining the consensus rate. We provide clear insights into the design of these two sequences, highlighting the interplay between the consensus rate and noise amplification. We investigate the performance of DLMD-DiffEx both theoretically and through numerical evaluations.
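To make the mechanics above concrete, here is a minimal, illustrative Python sketch of the differential-exchange idea. Everything specific in it is an assumption for illustration rather than the paper's exact construction: a Euclidean mirror map (so the mirror step reduces to a gradient step), a uniform quantizer standing in for the rate constraint, a single shared proxy array in place of per-link proxies, and the particular gamma/power sequence choices.

```python
import numpy as np

def dlmd_diffex_sketch(grads, W, d, T=200, noise_std=0.01, delta=0.05,
                       gamma=lambda t: 1.0 / (t + 2),
                       power=lambda t: (t + 2) ** 0.5):
    """Illustrative sketch (not the paper's exact algorithm).

    grads: per-node gradient oracles, grads[i](x) -> np.ndarray of shape (d,)
    W:     doubly stochastic mixing matrix (n x n) matching the network
    gamma: consensus-rate sequence; power: transmit-power sequence
    """
    n = len(grads)
    x = np.zeros((n, d))      # local estimates
    prox = np.zeros((n, d))   # proxies tracking the network's view of x
    rng = np.random.default_rng(0)
    for t in range(T):
        # Differential exchange: quantize the estimate/proxy disagreement
        # (a crude stand-in for the rate constraint), scale by power(t),
        # send through an additive-noise channel, rescale at the receiver.
        diff = np.round((x - prox) / delta) * delta
        rx = diff + (noise_std / power(t)) * rng.standard_normal((n, d))
        prox += rx            # all nodes update the shared proxies
        # Damped consensus on the proxies, then a local gradient step.
        x += gamma(t) * (W @ prox - prox)
        x -= gamma(t) * np.stack([grads[i](x[i]) for i in range(n)])
    return x.mean(axis=0)
```

The interplay the abstract points to is visible here: a growing power(t) shrinks the effective noise injected into each exchange, while a diminishing gamma(t) damps the consensus updates so that the residual noise does not accumulate.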
The present work introduces the hybrid consensus alternating direction method of multipliers (H-CADMM), a novel framework for optimization over networks which unifies existing distributed optimization approaches, including the centralized and the decentralized …
This technical note proposes the decentralized partial-consensus optimization problem with inequality constraints, and a continuous-time algorithm based on multiple interconnected recurrent neural networks (RNNs) is derived to solve the obtained optimization problem …
In this report, we study decentralized stochastic optimization to minimize a sum of smooth and strongly convex cost functions when the functions are distributed over a directed network of nodes. In contrast to existing work, we use gradient tracking …
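The gradient-tracking idea referred to above can be sketched in its generic form for directed graphs (the AB/push-pull template, with a row-stochastic matrix A and a column-stochastic matrix B respecting the digraph's edges); the matrices, stepsize, and horizon below are illustrative assumptions, not the algorithm of this particular report.

```python
import numpy as np

def gradient_tracking_directed(grads, A, B, x0, alpha=0.01, T=500):
    """Generic gradient-tracking sketch over a directed network.

    grads: per-node gradient oracles, grads[i](x) -> np.ndarray
    A:     row-stochastic mixing matrix; B: column-stochastic mixing matrix
    x0:    (n, d) initial estimates
    """
    n, _ = x0.shape
    x = x0.copy()
    g = np.stack([grads[i](x[i]) for i in range(n)])
    y = g.copy()                  # y tracks the network-average gradient
    for _ in range(T):
        x = A @ x - alpha * y     # consensus step + descent along tracker
        g_new = np.stack([grads[i](x[i]) for i in range(n)])
        y = B @ y + g_new - g     # gradient-tracking recursion
        g = g_new
    return x
```

The key invariant is that the column sum of y stays equal to the sum of the current local gradients, which is what lets every node descend along an estimate of the global gradient despite only local communication.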
In this paper we propose a novel consensus protocol for discrete-time multi-agent systems (MAS), which solves the dynamic consensus problem on the max value, i.e., the dynamic max-consensus problem. In the dynamic max-consensus problem, to each agent …
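For concreteness, one plausible minimal form of a dynamic max-consensus update (the protocol in the paper may well differ) is sketched below: each agent takes the maximum over its neighborhood, decays it by a tuning parameter alpha, and floors the result with its local time-varying reference signal; alpha is a hypothetical knob trading tracking speed against steady-state error.

```python
import numpy as np

def dynamic_max_consensus(u, neighbors, alpha=0.1):
    """Sketch of a dynamic max-consensus protocol (illustrative form).

    u:         (T, n) array; u[k, i] is agent i's reference signal at time k
    neighbors: list of n index lists encoding the communication graph
    alpha:     decay parameter; smaller alpha -> slower but tighter tracking
    """
    T, n = u.shape
    x = u[0].copy()                 # initialize estimates with local signals
    history = [x.copy()]
    for k in range(1, T):
        nbr_max = np.array([max(x[j] for j in neighbors[i] + [i])
                            for i in range(n)])
        x = np.maximum(u[k], nbr_max - alpha)  # decayed max, local floor
        history.append(x.copy())
    return np.array(history)
```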
In this paper, we propose two communication-efficient algorithms for decentralized optimization over a multi-agent network with a general directed topology. In the first part, we consider a novel communication-efficient gradient-tracking-based …