
Tight estimates for convergence of some non-stationary consensus algorithms

Publication date: 2007
Language: English





The present paper is devoted to estimating the speed of convergence towards consensus for a general class of discrete-time multi-agent systems. In the systems considered here, both the topology of the interconnection graph and the weights of the arcs are allowed to vary as functions of time. Under the hypothesis that some spanning tree structure is preserved over time, and that some nonzero minimal weight of the information transfer along this tree is guaranteed, an estimate of the contraction rate is given. The latter is expressed explicitly as the spectral radius of a matrix depending upon the tree depth and the lower bounds on the weights.
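To make the setting concrete, here is a minimal numerical sketch (not the paper's construction): a handful of agents run x(t+1) = W(t) x(t) with row-stochastic matrices W(t) that are redrawn at every step but always keep weight at least w_min on a fixed spanning-tree arc and on the self-loop, so the disagreement max(x) − min(x) contracts. All names and parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, w_min, T = 6, 0.2, 50

def random_update_matrix():
    """Row-stochastic W(t): each agent keeps weight >= w_min on itself and on
    its parent in a fixed spanning tree rooted at agent 0 (parent of i is i-1);
    any remaining weight is spread over randomly chosen other agents."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = w_min
        if i > 0:
            W[i, i - 1] = w_min                              # preserved spanning-tree arc
        extra = rng.random(n) * rng.integers(0, 2, size=n)   # random extra links
        extra[i] = 0.0
        if i > 0:
            extra[i - 1] = 0.0
        rest = 1.0 - W[i].sum()
        if extra.sum() > 0.0:
            W[i] += rest * extra / extra.sum()
        else:
            W[i, i] += rest
    return W

x = 10.0 * rng.random(n)                      # initial agent states
for t in range(T + 1):
    if t % 10 == 0:
        print(f"t={t:2d}  disagreement={x.max() - x.min():.3e}")
    x = random_update_matrix() @ x            # x(t+1) = W(t) x(t)
```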



Related research

This paper addresses the robust consensus problem under switching topologies. Contrary to existing methods, the proposed approach provides decentralized protocols that achieve consensus for networked multi-agent systems in a predefined time. Namely, the protocol design provides a tuning parameter that allows setting the convergence time of the agents to a consensus state. An appropriate Lyapunov analysis exposes the capability of the current proposal to achieve predefined-time consensus over switching topologies despite the presence of bounded perturbations. Finally, the paper presents a comparison showing that the suggested approach subsumes existing fixed-time consensus algorithms and provides extra degrees of freedom to obtain predefined-time consensus protocols that are less over-engineered, i.e., the gap between the estimated convergence time and the actual convergence time is smaller. Numerical results are given to illustrate the effectiveness and advantages of the proposed approach.
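For illustration only, the fixed-time family that such predefined-time designs build on typically combines sub- and super-linear powers of the neighbour disagreements, e.g. ẋ_i = Σ_j a_ij (sig^α(x_j − x_i) + sig^β(x_j − x_i)) with 0 < α < 1 < β, where sig^p(y) = sign(y)|y|^p. The sketch below integrates such a protocol with a simple Euler step; it is not the paper's predefined-time protocol, and the graph, gains, and initial states are assumptions made for the example.

```python
import numpy as np

def sig(y, p):
    """Signed power: sign(y) * |y|**p."""
    return np.sign(y) * np.abs(y) ** p

# Undirected ring of 5 agents; A[i, j] = 1 when i and j are neighbours.
n, alpha, beta, dt = 5, 0.5, 1.5, 1e-3
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

x = np.array([4.0, -3.0, 0.5, 7.0, -1.0])
for _ in range(20000):
    diffs = x[None, :] - x[:, None]                # diffs[i, j] = x_j - x_i
    u = (A * (sig(diffs, alpha) + sig(diffs, beta))).sum(axis=1)
    x = x + dt * u                                 # explicit Euler step
print("spread after integration:", x.max() - x.min())
```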
In this paper, we revisit the convergence of the Heavy-ball method and present improved convergence complexity results in the convex setting. We provide the first non-ergodic O(1/k) rate result for the Heavy-ball algorithm with constant step size on coercive objective functions. For objective functions satisfying a relaxed strongly convex condition, linear convergence is established under weaker assumptions on the step size and inertial parameter than those made in the existing literature. We extend our results to the multi-block version of the algorithm with both cyclic and stochastic update rules. In addition, our results can also be extended to decentralized optimization, where the ergodic analysis is not applicable.
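For reference, the Heavy-ball recursion with constant step size α and inertial parameter β is x_{k+1} = x_k − α∇f(x_k) + β(x_k − x_{k−1}). The minimal sketch below runs it on an assumed strongly convex quadratic purely as an illustration; the rates discussed above concern weaker settings (coercive or relaxed strongly convex objectives).

```python
import numpy as np

# Illustrative quadratic f(x) = 0.5 * x^T Q x - b^T x (strongly convex here).
Q = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
grad = lambda x: Q @ x - b

alpha, beta = 0.1, 0.5              # constant step size and inertial parameter
x_prev = x = np.zeros(2)
for k in range(200):
    # x_{k+1} = x_k - alpha * grad(x_k) + beta * (x_k - x_{k-1})
    x, x_prev = x - alpha * grad(x) + beta * (x - x_prev), x

x_star = np.linalg.solve(Q, b)
print("distance to minimiser:", np.linalg.norm(x - x_star))
```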
The present paper considers leveraging network topology information to improve the convergence rate of ADMM for decentralized optimization, where networked nodes work collaboratively to minimize the objective. Such problems can be solved efficiently using ADMM by decomposing the objective into easier subproblems. Properly exploiting the network topology can significantly improve the algorithm's performance. Hybrid ADMM explores the direction of exploiting node information by taking node centrality into account, but fails to utilize edge information. This paper fills the gap by incorporating both node and edge information and provides a novel convergence rate bound for decentralized ADMM that explicitly depends on the network topology. The bound is attainable for a certain class of problems and is therefore tight. The explicit dependence further suggests possible directions for the optimal design of edge weights to achieve the best performance. Numerical experiments show that simple heuristic methods can achieve better performance and also exhibit robustness to topology changes.
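As background, a standard decentralized consensus ADMM iteration (the generic edge-based scheme, not the hybrid node/edge-weighted variant studied in the paper) can be sketched for scalar quadratic local costs f_i(x) = (x − b_i)²/2: each node keeps a primal estimate x_i and an aggregated dual p_i and exchanges values only with its neighbours. The graph, penalty ρ, and data below are assumptions made for the example.

```python
import numpy as np

# Ring graph on 5 nodes; node i minimises f_i(x) = 0.5 * (x - b[i])**2,
# so the network-wide optimum is the average of b.
n, rho, iters = 5, 1.0, 200
b = np.array([1.0, 4.0, -2.0, 3.0, 0.0])
neighbours = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
deg = np.array([len(nb) for nb in neighbours], dtype=float)

x = np.zeros(n)
p = np.zeros(n)                       # aggregated edge duals, one per node
for k in range(iters):
    x_old = x.copy()
    nb_sum = np.array([x_old[nb].sum() for nb in neighbours])
    # x-update (closed form for the quadratic local cost)
    x = (b - p + rho * (deg * x_old + nb_sum)) / (1.0 + 2.0 * rho * deg)
    # dual update using the freshly exchanged neighbour values
    nb_sum_new = np.array([x[nb].sum() for nb in neighbours])
    p = p + rho * (deg * x - nb_sum_new)

print("node estimates:", np.round(x, 4), " true average:", b.mean())
```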
Many problems in science and engineering involve, as part of their solution process, the consideration of a separable function which is the sum of two convex functions, one of them possibly non-smooth. Recently a few works have discussed inexact …
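Composite problems of this form, minimize f(x) + g(x) with f smooth and g convex but possibly non-smooth, are commonly handled by proximal (forward-backward) splitting. Since the abstract above is truncated, the sketch below shows only a generic ISTA iteration on an assumed lasso instance, not the specific scheme discussed in that paper.

```python
import numpy as np

# Composite objective: F(x) = 0.5*||A x - y||^2 (smooth) + lam*||x||_1 (non-smooth)
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
y = rng.standard_normal(30)
lam = 0.5
L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth part's gradient

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)   # prox of t*||.||_1

x = np.zeros(10)
for k in range(300):
    x = soft(x - (A.T @ (A @ x - y)) / L, lam / L)   # gradient step, then prox
print("objective:", 0.5 * np.linalg.norm(A @ x - y) ** 2 + lam * np.abs(x).sum())
```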
We investigate the convergence and convergence rate of stochastic training algorithms for Neural Networks (NNs) that, over the years, have spawned from Dropout (Hinton et al., 2012). Modeling the idea that neurons in the brain may not fire, dropout algorithms in practice consist of multiplying the weight matrices of a NN component-wise by independently drawn random matrices with {0,1}-valued entries during each iteration of the Feedforward-Backpropagation algorithm. This paper presents a probability-theoretical proof that, for any NN topology and differentiable polynomially bounded activation functions, if we project the NN's weights onto a compact set and use a dropout algorithm, then the weights converge to a unique stationary set of a projected system of Ordinary Differential Equations (ODEs). We also establish an upper bound on the rate of convergence of Gradient Descent (GD) on the limiting ODEs of dropout algorithms for arborescences (a class of trees) of arbitrary depth and with linear activation functions.
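The weight-masking step described above can be sketched as follows: a toy two-layer linear network is trained by gradient descent while each weight matrix is multiplied component-wise by a freshly drawn Bernoulli {0,1} mask at every iteration. Network sizes, keep-probability, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 8))             # toy inputs
Y = X @ rng.standard_normal((8, 1))          # toy linear targets

W1, W2 = 0.1 * rng.standard_normal((8, 16)), 0.1 * rng.standard_normal((16, 1))
p, lr = 0.8, 0.05                            # keep-probability and step size

for step in range(500):
    M1 = rng.binomial(1, p, W1.shape)        # fresh {0,1} masks each iteration
    M2 = rng.binomial(1, p, W2.shape)
    H = X @ (W1 * M1)                        # forward pass with masked weights
    out = H @ (W2 * M2)
    err = out - Y
    # backward pass: gradients w.r.t. the masked weights, masked again by the chain rule
    gW2 = (H.T @ err) / len(X) * M2
    gW1 = (X.T @ (err @ (W2 * M2).T)) / len(X) * M1
    W1, W2 = W1 - lr * gW1, W2 - lr * gW2

# Report training error with the unmasked weights (no rescaling; illustration only).
print("final mean-squared error:", float(np.mean((X @ W1 @ W2 - Y) ** 2)))
```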
