
Superlinearly Convergent Asynchronous Distributed Network Newton Method

Posted by Fatemeh Mansoori
Publication date: 2017
Paper language: English





The problem of minimizing a sum of local convex objective functions over a networked system captures many important applications and has received much attention in the distributed optimization field. Most existing work focuses on developing fast distributed algorithms in the presence of a central clock. The only known algorithms with convergence guarantees for this problem in an asynchronous setup achieve either a sublinear rate in the totally asynchronous setting or a linear rate in the partially asynchronous setting (with bounded delay). In this work, we build upon the existing literature to develop and analyze an asynchronous Newton-based approach for solving a penalized version of the problem. We show that this algorithm converges almost surely, with a global linear rate and a local superlinear rate in expectation. Numerical studies confirm its superior performance over other existing asynchronous methods.
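
The abstract does not spell out the penalized reformulation. In the network Newton line of work the penalty is typically a quadratic disagreement term between neighboring copies of the decision variable, so a plausible (assumed, not quoted from this paper) form of the penalized problem is the following sketch:

```latex
% Assumed quadratic-penalty reformulation in the style of the network Newton
% literature; the exact penalty used in the paper may differ.
\min_{x \in \mathbb{R}^p} \ \sum_{i=1}^{n} f_i(x)
\quad \longrightarrow \quad
\min_{x_1,\dots,x_n \in \mathbb{R}^p} \
\alpha \sum_{i=1}^{n} f_i(x_i)
\; + \; \frac{1}{2} \sum_{(i,j) \in \mathcal{E}} w_{ij} \, \lVert x_i - x_j \rVert^2
```

Because the penalty couples only neighboring copies $x_i$ and $x_j$ over the edge set $\mathcal{E}$, Newton-type steps on the penalized objective can be approximated with local communication, and in this assumed form the penalized minimizer approaches the exact consensus solution as $\alpha \to 0$.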




Read also

Most existing work uses dual decomposition and subgradient methods to solve Network Utility Maximization (NUM) problems in a distributed manner, which suffer from slow convergence. This work develops an alternative distributed Newton-type, fast-converging algorithm for solving network utility maximization problems with self-concordant utility functions. By using novel matrix splitting techniques, both primal and dual updates for the Newton step can be computed using iterative schemes in a decentralized manner with limited information exchange. Similarly, the stepsize can be obtained via an iterative consensus-based averaging scheme. We show that even when the Newton direction and the stepsize in our method are computed within some error (due to finite truncation of the iterative schemes), the resulting objective function value still converges superlinearly to an explicitly characterized error neighborhood. Simulation results demonstrate significant convergence rate improvement of our algorithm relative to existing subgradient methods based on dual decomposition.
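
The abstract above computes the Newton direction through an iterative matrix splitting scheme. Below is a minimal, generic Jacobi-type splitting sketch for a linear system H d = -g; the matrices, the diagonal split, and the iteration count are illustrative assumptions, not the paper's NUM-specific splitting:

```python
import numpy as np

def splitting_solve(H, g, num_iters=50):
    """Approximately solve H d = -g by a Jacobi-type matrix splitting.

    Illustrative sketch only: H is split into its diagonal D and the
    off-diagonal remainder B, and we iterate d <- D^{-1}(-g - B d).
    The iteration converges when the spectral radius of D^{-1} B is
    below one (e.g. H strictly diagonally dominant); the paper's actual
    splitting is adapted to the NUM primal/dual structure.
    """
    D = np.diag(np.diag(H))   # part each node can invert locally
    B = H - D                 # coupling part that needs neighbor information
    d = np.zeros_like(g)
    for _ in range(num_iters):    # finite truncation -> inexact Newton direction
        d = np.linalg.solve(D, -g - B @ d)
    return d

# Toy example with a diagonally dominant H so the iteration converges.
H = np.array([[4.0, 1.0], [1.0, 3.0]])
g = np.array([1.0, -2.0])
print(splitting_solve(H, g), np.linalg.solve(H, -g))
```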
The paper proposes and justifies a new algorithm of the proximal Newton type to solve a broad class of nonsmooth composite convex optimization problems without strong convexity assumptions. Based on advanced notions and techniques of variational analysis, we establish implementable results on the global convergence of the proposed algorithm as well as its local convergence with superlinear and quadratic rates. For certain structured problems, the obtained local convergence conditions do not require the local Lipschitz continuity of the corresponding Hessian mappings, which is a crucial assumption used in the literature to ensure superlinear convergence of other algorithms of the proximal Newton type. The conducted numerical experiments on the $l_1$-regularized logistic regression model illustrate the possibility of applying the proposed algorithm to practically important problems.
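
As a concrete illustration of the proximal Newton template on the $l_1$-regularized logistic regression model mentioned above, here is a minimal sketch that replaces the exact Hessian with a diagonal surrogate (plus a damping term) so the scaled proximal step has a closed form. The surrogate, the synthetic data, and the fixed iteration count are assumptions for illustration, not the paper's algorithm:

```python
import numpy as np

def soft_threshold(z, t):
    """Componentwise soft-thresholding, the proximal mapping of t*||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_newton_step_diag(x, grad, hess_diag, lam):
    """One proximal Newton-type step for  min f(x) + lam*||x||_1  using a
    DIAGONAL Hessian surrogate, so the scaled proximal subproblem has the
    closed form below.  A full proximal Newton method would use the exact
    Hessian and solve the subproblem iteratively; this is a simplification."""
    return soft_threshold(x - grad / hess_diag, lam / hess_diag)

# Toy l1-regularized logistic regression on synthetic (assumed) data:
#   f(x) = (1/m) * sum_i log(1 + exp(-y_i * a_i^T x))
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5))
y = np.sign(rng.standard_normal(50))
lam, x = 0.05, np.zeros(5)
for _ in range(100):
    margin = y * (A @ x)
    p = 1.0 / (1.0 + np.exp(margin))                 # = sigmoid(-margin)
    grad = -(A.T @ (y * p)) / len(y)                 # gradient of f
    w = p * (1.0 - p)                                # logistic curvature weights
    hess_diag = (A**2 * w[:, None]).sum(axis=0) / len(y) + 0.1  # + damping term
    x = prox_newton_step_diag(x, grad, hess_diag, lam)
print(x)
```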
In this work, we consider the asynchronous distributed optimization problem in which each node has its own convex cost function and can communicate directly only with its neighbors, as determined by a directed communication topology (directed graph, or digraph). First, we reformulate the optimization problem so that the Alternating Direction Method of Multipliers (ADMM) can be utilized. Then, we propose an algorithm, herein called Asynchronous Approximate Distributed Alternating Direction Method of Multipliers (AsyAD-ADMM), which uses finite-time asynchronous approximate ratio consensus to solve the multi-node convex optimization problem; every node performs iterative computations and exchanges information with its neighbors asynchronously. More specifically, at every iteration of AsyAD-ADMM, each node solves a local convex optimization problem for one of the primal variables and utilizes a finite-time asynchronous approximate consensus protocol to obtain a value of the other primal variable that is close to the optimal one, since the cost function for the second primal variable is not decomposable. If the individual cost functions are convex but not necessarily differentiable, the proposed algorithm converges at a rate of $\mathcal{O}(1/k)$, where $k$ is the iteration counter. The efficacy of AsyAD-ADMM is exemplified via a proof-of-concept distributed least-squares optimization problem in which different performance-influencing factors are investigated.
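
The ratio-consensus building block referenced above can be illustrated with a minimal synchronous push-sum sketch on a small digraph. The graph, the number of rounds, and the synchronous scheduling are assumptions for illustration; the paper uses a finite-time asynchronous approximate variant:

```python
import numpy as np

# Hypothetical 4-node strongly connected digraph, given as out-neighbor lists.
out_neighbors = {0: [1], 1: [2, 3], 2: [0], 3: [0, 2]}
n = len(out_neighbors)
x = np.array([3.0, -1.0, 5.0, 1.0])   # local values; their average is 2.0

# Ratio consensus (push-sum): two coupled linear iterations with
# column-stochastic weights built from out-degrees only, which is why the
# scheme works on digraphs; each node's ratio y_i / z_i approaches the average.
y, z = x.copy(), np.ones(n)
for _ in range(100):                   # synchronous rounds, for simplicity
    y_new, z_new = np.zeros(n), np.zeros(n)
    for j in range(n):
        share = 1.0 / (len(out_neighbors[j]) + 1)  # split among self + out-neighbors
        for i in [j] + out_neighbors[j]:
            y_new[i] += share * y[j]
            z_new[i] += share * z[j]
    y, z = y_new, z_new

print(y / z)   # every entry is close to the network-wide average of x
```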
Bahman Kalantari, 2020
Newton's method for polynomial root finding is one of mathematics' best-known algorithms. The method also has its shortcomings: it is undefined at critical points, it can exhibit chaotic behavior, and it is only guaranteed to converge locally. Based on the {\it Geometric Modulus Principle} for a complex polynomial $p(z)$, together with a {\it Modulus Reduction Theorem} proved here, we develop the {\it Robust Newton's method} (RNM), defined everywhere with a step size that guarantees an {\it a priori} reduction in polynomial modulus in each iteration. Furthermore, we prove that RNM iterates converge globally, either to a root or to a critical point. Specifically, given $\varepsilon$ and any seed $z_0$, in $t=O(1/\varepsilon^{2})$ iterations of RNM, independent of the degree of $p(z)$, either $|p(z_t)| \leq \varepsilon$ or $|p(z_t)\,p'(z_t)| \leq \varepsilon$. By adjusting the iterates at {\it near-critical points}, we describe a {\it modified} RNM that necessarily converges to a root. In combination with Smale's point estimation, RNM results in a globally convergent Newton's method with a locally quadratic rate. We present sample polynomiographs that demonstrate how, in contrast with Newton's method, RNM smooths out the fractal boundaries of the basins of attraction of roots. RNM also shows potential for computing all roots of arbitrary-degree polynomials. A particular consequence of RNM is a simple algorithm for solving cubic equations.
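
For contrast with the robust method described above, the following sketch runs the classical Newton iteration on a polynomial with a crude backtracking safeguard that only accepts modulus-decreasing steps. The safeguard, the test cubic, and the tolerances are assumptions for illustration and are not the RNM step-size rule from the paper:

```python
import numpy as np

def damped_newton_poly(coeffs, z0, tol=1e-12, max_iter=200):
    """Classical Newton iteration for a polynomial (coefficients given with the
    highest degree first), with a crude backtracking safeguard that halves the
    step until the modulus |p| decreases.  This safeguard is NOT the RNM
    step-size rule; it only mimics the idea of forcing modulus reduction.
    Plain Newton (full step) is undefined at critical points and can behave
    chaotically."""
    p = np.poly1d(coeffs)
    dp = p.deriv()
    z = complex(z0)
    for _ in range(max_iter):
        pz = p(z)
        if abs(pz) < tol:
            return z
        dpz = dp(z)
        if dpz == 0:                       # critical point: Newton step undefined
            return z
        step = -pz / dpz
        t = 1.0
        while abs(p(z + t * step)) >= abs(pz) and t > 1e-12:
            t *= 0.5                       # shrink until the modulus actually drops
        z = z + t * step
    return z

# Cubic with roots 1, 2, 3:  p(z) = z^3 - 6 z^2 + 11 z - 6  (illustrative example)
print(damped_newton_poly([1, -6, 11, -6], z0=4.0 + 0.5j))
```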
Ermin Wei, Asuman Ozdaglar, 2013
We consider a network of agents that are cooperatively solving a global optimization problem, where the objective function is the sum of privately known local objective functions of the agents and the decision variables are coupled via linear constraints. Recent literature has focused on special cases of this formulation and studied their distributed solution through either subgradient-based methods, with an $O(1/\sqrt{k})$ rate of convergence (where $k$ is the iteration number), or Alternating Direction Method of Multipliers (ADMM) based methods, which require a synchronous implementation and a globally known order on the agents. In this paper, we present a novel asynchronous ADMM-based distributed method for the general formulation and show that it converges at the rate $O(1/k)$.
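
To make the ADMM update structure above concrete, here is a minimal synchronous consensus ADMM sketch on a toy quadratic problem. The data, penalty parameter, and synchronous sweep are illustrative assumptions; the paper's contribution is an asynchronous variant of such updates that does not require a central clock or a global agent ordering:

```python
import numpy as np

# Consensus ADMM sketch for  min_x sum_i f_i(x)  with f_i(x) = 0.5 * (x - b_i)^2,
# rewritten with local copies and a linear coupling constraint  x_i = z.
# The synchronous sweep, data b, and penalty rho are illustrative assumptions;
# the paper's method performs such primal/dual updates asynchronously.
b = np.array([1.0, 4.0, -2.0, 7.0])   # local data; the optimum is mean(b)
rho = 1.0
x = np.zeros_like(b)                  # local primal copies
z = 0.0                               # shared (coupling) variable
u = np.zeros_like(b)                  # scaled dual variables, one per constraint
for k in range(100):
    x = (b + rho * (z - u)) / (1.0 + rho)   # local minimizations (closed form here)
    z = np.mean(x + u)                      # coordination step
    u = u + x - z                           # dual update enforcing x_i = z
print(z, b.mean())
```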