We propose a new distributed optimization algorithm for solving a class of constrained optimization problems in which (a) the objective function is separable (i.e., a sum of the agents' local objective functions), (b) the optimization variables of the distributed agents, which are subject to nontrivial local constraints, are coupled by global constraints, and (c) only noisy observations are available for estimating (the gradients of) the local objective functions. In many practical scenarios, agents may not be willing to share their optimization variables with others. For this reason, we propose a distributed algorithm that does not require the agents to share their optimization variables with each other; instead, each agent maintains a local estimate of the global constraint functions and shares this estimate only with its neighbors. These local estimates of the constraint functions are updated using a consensus-type algorithm, while the local optimization variables of each agent are updated using a first-order method based on noisy gradient estimates. We prove that, when the agents adopt the proposed algorithm, their optimization variables converge with probability 1 to an optimal point of an approximate problem obtained via the penalty method.
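A minimal sketch of this update structure, assuming quadratic local objectives, a single scalar coupling constraint, and a complete communication graph; the names (W, gi, rho, and so on) are our own illustrative choices, not the paper's notation:

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 4, 3
    W = np.full((n, n), 1.0 / n)              # doubly stochastic mixing matrix (complete graph)
    targets = rng.standard_normal((n, d))     # f_i(x_i) = 0.5 * ||x_i - targets[i]||^2

    def gi(xi):
        # agent i's additive share of the global coupling constraint sum_i gi(x_i) <= 0
        return xi.sum() - 0.25

    x = np.zeros((n, d))                      # private decision variables, never communicated
    g_prev = np.array([gi(x[i]) for i in range(n)])
    y = g_prev.copy()                         # local estimates of the average constraint value
    rho, step = 20.0, 0.02                    # penalty weight and step size

    for t in range(2000):
        # first-order step on each agent's variable, using a noisy gradient estimate
        for i in range(n):
            grad_f = (x[i] - targets[i]) + 0.1 * rng.standard_normal(d)
            grad_pen = rho * max(y[i], 0.0) * np.ones(d)   # gradient of the quadratic penalty,
            x[i] -= step * (grad_f + grad_pen)             # with y[i] in place of the true value
        # consensus-type (dynamic average consensus) update of the constraint estimates
        g_new = np.array([gi(x[i]) for i in range(n)])
        y = W @ y + (g_new - g_prev)
        g_prev = g_new

    print("global constraint value:", g_new.sum(), "agent estimates:", y)

Note that each agent's variable x[i] is never exchanged; only the scalar constraint estimates y travel over the network, which is the privacy feature the abstract emphasizes.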
The paper proves convergence to global optima for a class of distributed algorithms for nonconvex optimization in network-based multi-agent settings. Agents are permitted to communicate over a time-varying undirected graph. Each agent is assumed to possess a local objective function.
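The algorithm class in question follows the familiar consensus-plus-gradient template; below is a minimal sketch under our own illustrative assumptions (a random perfect matching standing in for the time-varying undirected graph, smooth nonconvex scalar objectives, and none of the annealing or noise terms that global-optimality analyses typically add):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 6
    offsets = rng.standard_normal(n)

    def grad(i, z):
        # gradient of the smooth nonconvex local objective f_i(z) = cos(z - offsets[i])
        return -np.sin(z - offsets[i])

    z = rng.standard_normal(n)              # each agent's local copy of the decision variable
    for t in range(1, 3001):
        # time-varying undirected graph: a random perfect matching is active each round
        perm = rng.permutation(n)
        z_new = z.copy()
        for a, b in zip(perm[::2], perm[1::2]):
            z_new[a] = z_new[b] = 0.5 * (z[a] + z[b])   # pairwise consensus on active edges
        step = 1.0 / t                                  # diminishing step size
        z = z_new - step * np.array([grad(i, z_new[i]) for i in range(n)])

    print("agent copies after consensus + gradient steps:", z)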
This paper considers a distributed convex optimization problem over a time-varying multi-agent network, where each agent has its own decision variables that should be set so as to minimize its individual objective subject to local constraints and global coupling constraints.
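In symbols, this problem class is commonly written as follows (the notation is ours, not necessarily the paper's):

    \min_{x_1, \dots, x_N} \; \sum_{i=1}^{N} f_i(x_i)
    \quad \text{s.t.} \quad x_i \in X_i, \; i = 1, \dots, N,
    \qquad \sum_{i=1}^{N} g_i(x_i) \le 0,

where f_i is agent i's objective, X_i its local constraint set, and the final inequality the global constraint coupling all agents' decisions.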
This paper first proposes an N-block PCPM algorithm to solve N-block convex optimization problems with both linear and nonlinear constraints, with global convergence established. A linear convergence rate under strong second-order conditions for optimality is also derived.
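For the linearly constrained case min sum_i f_i(x_i) s.t. sum_i A_i x_i = b, the predictor-corrector proximal multiplier (PCPM) pattern has roughly the following shape; this is a generic sketch in our own notation under illustrative assumptions (quadratic blocks so the proximal step is closed-form), not the paper's N-block variant verbatim:

    import numpy as np

    rng = np.random.default_rng(2)
    N, d, m = 3, 4, 2
    A = [rng.standard_normal((m, d)) for _ in range(N)]   # coupling: sum_i A[i] @ x_i = b
    b = rng.standard_normal(m)
    targets = [rng.standard_normal(d) for _ in range(N)]  # f_i(x) = 0.5 * ||x - targets[i]||^2

    def residual(xs):
        return sum(Ai @ xi for Ai, xi in zip(A, xs)) - b

    x = [np.zeros(d) for _ in range(N)]
    lam = np.zeros(m)                                     # Lagrange multiplier estimate
    rho = 0.1                                             # proximal / penalty parameter

    for t in range(5000):
        # predictor: tentative multiplier from the current residual
        p = lam + rho * residual(x)
        # decoupled proximal step per block; for these quadratics the argmin of
        # f_i(x) + p @ (A[i] @ x) + ||x - x[i]||^2 / (2 * rho) is available in closed form
        x = [(targets[i] - A[i].T @ p + x[i] / rho) / (1.0 + 1.0 / rho) for i in range(N)]
        # corrector: multiplier update with the refreshed residual
        lam = lam + rho * residual(x)

    print("constraint residual norm:", float(np.linalg.norm(residual(x))))

Because the blocks only interact through the predicted multiplier p, all N proximal steps can run in parallel, which is the appeal of the PCPM family for N-block problems.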
In this note, we consider continuous-time dynamics for distributed optimization with set constraints. To avoid the computational burden of projection-based dynamics, which require solving a general quadratic optimization subproblem for each projection, we propose an alternative dynamics that does not rely on explicit projections.
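A toy Euler discretization contrasting projection-based dynamics with one projection-free, penalty-style alternative, for a box constraint where the projection happens to be cheap (np.clip); all names and constants here are illustrative, and this is not the note's specific dynamics:

    import numpy as np

    target = np.array([2.0, -3.0])     # minimize f(x) = 0.5 * ||x - target||^2
    lo, hi = -1.0, 1.0                 # set constraint: x in [-1, 1]^2
    h, T = 0.01, 5000                  # Euler step size and horizon

    # projection-based dynamics: x_dot = P_X(x - grad_f(x)) - x
    x = np.zeros(2)
    for _ in range(T):
        x += h * (np.clip(x - (x - target), lo, hi) - x)

    # projection-free alternative: gradient flow plus a penalty force at the boundary
    y = np.zeros(2)
    beta = 50.0
    for _ in range(T):
        violation = np.maximum(y - hi, 0.0) - np.maximum(lo - y, 0.0)
        y += h * (-(y - target) - beta * violation)

    print("projected dynamics:", x, " penalty dynamics:", y)

For a general constraint set, the clip above would become a quadratic program solved at every instant, which is exactly the per-step cost that motivates projection-free designs.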
One of the most widely used methods for solving large-scale stochastic optimization problems is distributed asynchronous stochastic gradient descent (DASGD), a family of algorithms that result from parallelizing stochastic gradient descent on distributed computing architectures.
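A single-process simulation of the DASGD pattern, in which worker gradients arrive after a random delay and are applied to the current parameters even though they were computed on stale ones; the delay model, objective, and step-size schedule are our own illustrative choices:

    import heapq
    import numpy as np

    rng = np.random.default_rng(3)
    dim = 5
    opt = rng.standard_normal(dim)           # optimum of f(w) = 0.5 * ||w - opt||^2

    def stochastic_grad(w):
        # noisy gradient oracle standing in for a minibatch gradient
        return (w - opt) + 0.1 * rng.standard_normal(dim)

    w = np.zeros(dim)
    inflight = []                            # min-heap of (arrival_step, id, gradient)
    for t in range(1, 5001):
        # a worker reads the current parameters now, but its gradient arrives later
        delay = int(rng.integers(1, 10))
        heapq.heappush(inflight, (t + delay, t, stochastic_grad(w)))
        # the server applies whichever (stale) gradients have arrived by step t
        while inflight and inflight[0][0] <= t:
            _, _, g = heapq.heappop(inflight)
            w -= (1.0 / t) * g               # diminishing step size

    print("distance to optimum:", float(np.linalg.norm(w - opt)))

The heap models asynchrony: by the time a gradient is applied, the parameters have already moved, and the staleness (here up to 9 steps) is the central difficulty in DASGD convergence analyses.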