Dual decomposition is widely used in distributed optimization of multi-agent systems. In practice, the dual decomposition algorithm is desired to admit an asynchronous implementation due to imperfect communication, such as time delays and packet drops. In addition, computational errors arise when individual agents solve their own subproblems. In this paper, we analyze the convergence of the dual decomposition algorithm in distributed optimization when both asynchrony in communication and inexactness in solving subproblems are present. We find that the interaction between asynchrony and inexactness slows the convergence rate from $\mathcal{O}(1/k)$ to $\mathcal{O}(1/\sqrt{k})$. Specifically, with a constant step size, the objective value converges to a neighborhood of the optimal value, and the solution converges to a neighborhood of the exact optimal solution. Moreover, the violation of the constraints diminishes at rate $\mathcal{O}(1/\sqrt{k})$. Our results generalize and unify existing ones that consider only asynchrony or only inexactness. Finally, numerical simulations validate the theoretical results.
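To make the setting above concrete, the following minimal sketch runs dual decomposition on a toy resource-allocation problem, modeling asynchrony through stale multipliers and inexactness through noisy subproblem solutions. The problem data, delay bound `max_delay`, and noise level are illustrative assumptions, not the paper's setup.

```python
# A minimal sketch of dual decomposition with asynchrony and inexact
# subproblem solves, assuming a toy resource-allocation problem:
#   minimize   sum_i 0.5*(x_i - c_i)^2
#   subject to sum_i x_i = b.
# The data (c, b), the staleness model, and the noise level are
# illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
n_agents, b = 5, 10.0
c = rng.uniform(0.0, 4.0, n_agents)   # local targets c_i

def solve_subproblem(c_i, lam, noise=0.05):
    # Exact minimizer of 0.5*(x - c_i)^2 + lam*x is x = c_i - lam;
    # additive noise models an inexact inner solver.
    return c_i - lam + noise * rng.standard_normal()

lam, lam_history = 0.0, [0.0]
step, max_delay = 0.05, 3
for k in range(1, 2001):
    x = np.empty(n_agents)
    for i in range(n_agents):
        # Asynchrony: each agent may read a multiplier that is stale
        # by up to max_delay - 1 master iterations (bounded delay).
        delay = rng.integers(0, min(max_delay, len(lam_history)))
        x[i] = solve_subproblem(c[i], lam_history[-1 - delay])
    # Master dual-ascent step on the coupling constraint sum_i x_i = b.
    lam += step * (x.sum() - b)
    lam_history.append(lam)

print(f"constraint violation |sum x - b| = {abs(x.sum() - b):.4f}")
print(f"multiplier lambda = {lam:.4f}")
```

Consistent with the analysis above, with a constant step size the iterates settle into a neighborhood of the optimum whose size depends on the noise level and the delay bound.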
A collection of optimization problems central to power system operation requires distributed solution architectures to avoid the need for aggregation of all information at a central location. In this paper, we study distributed dual subgradient methods…
This paper studies the distributed optimization problem where the objective functions might be nondifferentiable and subject to heterogeneous set constraints. Unlike existing subgradient methods, we focus on the case when the exact subgradients of the…
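As a hedged illustration of that setting, the sketch below runs a consensus-based projected subgradient update with perturbed (inexact) subgradients under per-agent set constraints. The mixing matrix `W`, local objectives $f_i(x) = |x - a_i|$, interval constraints, and noise model are all assumptions for demonstration, not the method of the paper above.

```python
# A minimal sketch of a distributed projected-subgradient update with
# perturbed (inexact) subgradients and heterogeneous set constraints.
import numpy as np

rng = np.random.default_rng(1)
a = np.array([-2.0, 0.5, 1.0, 3.0])          # targets of f_i(x) = |x - a_i|
boxes = [(-1.0, 4.0), (-3.0, 2.0), (0.0, 5.0), (-2.0, 2.0)]  # sets X_i
W = np.full((4, 4), 0.25)                     # doubly stochastic mixing matrix

x = np.zeros(4)
for k in range(1, 5001):
    step = 1.0 / np.sqrt(k)                     # diminishing step size
    g = np.sign(x - a)                          # exact subgradient of |x - a_i|
    g_noisy = g + 0.1 * rng.standard_normal(4)  # inexact subgradient
    x = W @ x - step * g_noisy                  # consensus + subgradient step
    x = np.array([np.clip(x[i], *boxes[i]) for i in range(4)])  # project on X_i

print("agent states:", np.round(x, 3))          # agents should nearly agree
```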
Optimization in distributed networks plays a central role in almost all distributed machine learning problems. In principle, distributed task allocation reduces computation time, allowing better response rates and higher data reliability…
In this work, we revisit a classical incremental implementation of the primal-descent dual-ascent gradient method used for the solution of equality-constrained optimization problems. We provide a short proof that establishes the linear (exponential) convergence…
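For reference, a minimal (non-incremental) sketch of the primal-descent dual-ascent gradient dynamics on an equality-constrained quadratic program follows; the problem data and step size are illustrative assumptions, and the incremental variant studied above would instead process the cost terms one at a time.

```python
# A minimal sketch of primal-descent dual-ascent gradient dynamics for
#   minimize 0.5 x'Qx + q'x  subject to  Ax = b,
# with illustrative data (Q, q, A, b) not taken from the paper.
import numpy as np

rng = np.random.default_rng(2)
n, m = 6, 2
M = rng.standard_normal((n, n))
Q = M @ M.T + n * np.eye(n)        # strongly convex quadratic cost
q = rng.standard_normal(n)
A = rng.standard_normal((m, n))    # full-row-rank equality constraints
b = rng.standard_normal(m)

x, lam, step = np.zeros(n), np.zeros(m), 0.02
for _ in range(20000):
    # Saddle-point step on L(x, lam) = 0.5 x'Qx + q'x + lam'(Ax - b):
    grad_x = Q @ x + q + A.T @ lam   # primal descent direction
    x -= step * grad_x
    lam += step * (A @ x - b)        # dual ascent on the residual

print("constraint residual:", np.linalg.norm(A @ x - b))
```

For a strongly convex cost and full-row-rank constraints, this discretized saddle-point iteration converges linearly for a sufficiently small step size, matching the linear (exponential) rate claimed above.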
This paper investigates accelerating the convergence of distributed optimization algorithms on non-convex problems. We propose a distributed primal-dual stochastic gradient descent (SGD) equipped with the powerball method for acceleration. We show that the…
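As an illustrative sketch, the powerball transformation reshapes each gradient coordinate as $\mathrm{sign}(g)\,|g|^{\gamma}$ before the descent step; the quadratic loss, choice of $\gamma$, and noise model below are assumptions for demonstration, not the distributed primal-dual setup of the paper.

```python
# A minimal sketch of the powerball transformation in a stochastic
# gradient step; loss, gamma, and noise are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
gamma, step = 0.5, 0.05

def powerball(g, gamma):
    # Elementwise sign(g) * |g|**gamma reshapes gradient magnitudes.
    return np.sign(g) * np.abs(g) ** gamma

x = rng.standard_normal(10) * 5.0
for _ in range(500):
    g = 2.0 * x + 0.1 * rng.standard_normal(10)   # noisy gradient of ||x||^2
    x -= step * powerball(g, gamma)

print("final distance to optimum:", np.linalg.norm(x))
```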