The problem of minimizing a sum of local convex objective functions over a networked system captures many important applications and has received much attention in the distributed optimization field. Most existing work focuses on developing fast distributed algorithms in the presence of a central clock. The only known algorithms with convergence guarantees for this problem in the asynchronous setup achieve either a sublinear rate under a totally asynchronous setting or a linear rate under a partially asynchronous setting (with bounded delay). In this work, we build upon the existing literature to develop and analyze an asynchronous Newton-based approach for solving a penalized version of the problem. We show that this algorithm converges almost surely, with a global linear rate and a local superlinear rate in expectation. Numerical studies confirm its superior performance against other existing asynchronous methods.
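For concreteness, the underlying setting can be sketched as the standard consensus problem together with a quadratic disagreement penalty; the symbols $f_i$, $W$, $\mathbf{x}$, and $\alpha$ below are illustrative assumptions of this sketch, not notation taken from the abstract itself.
\[
\min_{x\in\mathbb{R}^d}\ \sum_{i=1}^{n} f_i(x)
\qquad\longrightarrow\qquad
\min_{\mathbf{x}=(x_1,\dots,x_n)}\ \sum_{i=1}^{n} f_i(x_i) \;+\; \frac{1}{2\alpha}\,\mathbf{x}^{\top}(I-W)\,\mathbf{x},
\]
where each node $i$ privately holds a convex cost $f_i$, $W$ is a doubly stochastic mixing matrix conforming to the network topology (acting block-wise on the stacked vector $\mathbf{x}$), and the penalty parameter $\alpha>0$ trades off consensus violation against the original objective. Penalizing rather than enforcing consensus yields an unconstrained problem, which is what makes Newton-type updates amenable to distributed computation.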
Most existing work uses dual decomposition and subgradient methods to solve Network Utility Maximization (NUM) problems in a distributed manner; these methods suffer from slow convergence rates. This work develops an alternative distributed Newton-type method.
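For reference, the NUM problem addressed by dual decomposition is typically stated as follows; the routing matrix $R$ and capacity vector $c$ are standard notation assumed here, not quoted from the abstract.
\[
\max_{x \ge 0}\ \sum_{s} U_s(x_s)
\quad\text{subject to}\quad R x \le c,
\]
where source $s$ transmits at rate $x_s$ and receives a concave utility $U_s(x_s)$, $R$ is the link-route incidence (routing) matrix, and $c$ collects the link capacities. Dual decomposition relaxes the capacity constraints with link prices and updates those prices by (sub)gradient steps, which is the first-order scheme whose slow convergence motivates the Newton-type alternative.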
The paper proposes and justifies a new algorithm of the proximal Newton type to solve a broad class of nonsmooth composite convex optimization problems without strong convexity assumptions, based on advanced notions and techniques of variational analysis.
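To fix ideas, the composite model and the generic proximal Newton step can be written as below; the symbols $f$, $g$, and $H_k$ are assumptions of this sketch rather than notation taken from the paper.
\[
\min_{x\in\mathbb{R}^n}\ \varphi(x) := f(x) + g(x),
\qquad
x_{k+1} \in \arg\min_{x}\ \nabla f(x_k)^{\top}(x - x_k) + \tfrac{1}{2}(x - x_k)^{\top} H_k\, (x - x_k) + g(x),
\]
where $f$ is smooth and convex, $g$ is convex but possibly nonsmooth (for example an $\ell_1$ regularizer), and $H_k$ is the Hessian of $f$ at $x_k$ or an approximation of it; without strong convexity, $H_k$ may only be positive semidefinite.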
In this work, we consider the asynchronous distributed optimization problem in which each node has its own convex cost function and can communicate directly only with its neighbors, as determined by a directed communication topology (a directed graph).
Newton's method for polynomial root finding is one of mathematics' most well-known algorithms. The method also has its shortcomings: it is undefined at critical points, it can exhibit chaotic behavior, and it is only guaranteed to converge locally.
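The iteration in question, applied to a polynomial $p$ with derivative $p'$, is
\[
z_{k+1} = z_k - \frac{p(z_k)}{p'(z_k)},
\]
which is undefined at critical points where $p'(z_k) = 0$ and, when started far from a root, may behave chaotically rather than converge; convergence is guaranteed only in a neighborhood of a root, which is the local-convergence caveat noted above.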
We consider a network of agents that are cooperatively solving a global optimization problem, where the objective function is the sum of privately known local objective functions of the agents and the decision variables are coupled via linear constraints.
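A generic instance of this linearly coupled setup, written with illustrative notation ($f_i$, $A_i$, and $b$ are assumptions of the sketch, not taken from the abstract), is
\[
\min_{x_1,\dots,x_N}\ \sum_{i=1}^{N} f_i(x_i)
\quad\text{subject to}\quad \sum_{i=1}^{N} A_i x_i = b,
\]
where agent $i$ privately knows its convex cost $f_i$ and its coupling matrix $A_i$, and the shared linear constraint is what ties the agents' decisions together and forces cooperation.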