
Convergence analysis of the discrete consensus-based optimization algorithm with random batch interactions and heterogeneous noises

Posted by: Dr. Doheon Kim
Publication date: 2021
Research language: English





We present stochastic consensus and convergence results for the discrete consensus-based optimization (CBO) algorithm with random batch interactions and heterogeneous external noises. Despite its wide applications and successful performance in many practical simulations, the convergence of the discrete CBO algorithm had not been rigorously investigated in such generality. In this work, we introduce a generalized discrete CBO algorithm with a weighted representative point and random batch interactions, and show that the proposed algorithm exhibits stochastic consensus and convergence toward a common equilibrium state exponentially fast under suitable assumptions on the system parameters. To this end, we recast the given CBO algorithm with random batch interactions as a discrete consensus model with a randomly switching network topology, and then use the mixing property of interactions over a sufficiently long time interval to derive stochastic consensus and convergence estimates in the mean-square and almost-sure senses. Our analysis significantly improves earlier convergence results for CBO models with full batch interactions and homogeneous external noises.
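Since the abstract names the algorithmic ingredients (a weighted representative point, random batch interactions, per-particle noise), a minimal sketch of one iteration may help fix ideas. This is our illustrative reconstruction in Python; the parameter names (beta, lam, sigma, dt) and the anisotropic noise model are assumptions of the sketch, not the paper's exact formulation.

import numpy as np

def cbo_step(x, f, beta=30.0, lam=1.0, sigma=0.8, dt=0.01, batch_size=10, rng=None):
    # One sketched iteration of discrete CBO with random batch interactions.
    # x : (n, d) particle positions; f : objective, maps an (m, d) array to (m,) values.
    rng = np.random.default_rng() if rng is None else rng
    n, d = x.shape
    idx = rng.permutation(n)                          # random batch partition
    for start in range(0, n, batch_size):
        batch = idx[start:start + batch_size]
        vals = f(x[batch])
        w = np.exp(-beta * (vals - vals.min()))       # stabilized Gibbs weights
        xbar = (w[:, None] * x[batch]).sum(0) / w.sum()  # weighted representative point
        drift = x[batch] - xbar
        noise = rng.standard_normal((len(batch), d))  # independent noise per particle,
        # a crude stand-in for the paper's heterogeneous external noises
        x[batch] = x[batch] - lam * dt * drift + sigma * np.sqrt(dt) * drift * noise
    return x

# usage sketch: particles concentrate near the minimizer of a toy quadratic
x = np.random.default_rng(0).standard_normal((100, 2)) * 3.0
for _ in range(500):
    x = cbo_step(x, lambda z: (z ** 2).sum(axis=1))
print(x.mean(axis=0))   # close to the global minimizer (0, 0)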


Read also

This technical note proposes decentralized partial-consensus optimization with inequality constraints, and derives a continuous-time algorithm based on multiple interconnected recurrent neural networks (RNNs) to solve the resulting optimization problems. First, a partial-consensus matrix originating from the Laplacian matrix is constructed to handle the partial-consensus constraints. In addition, the convergence of the designed algorithm is guaranteed using non-smooth analysis and Lyapunov-based techniques. Finally, the effectiveness of the obtained results is demonstrated through several examples.
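As a rough illustration of the continuous-time consensus-plus-gradient idea only (not the note's RNN construction, and using a plain Laplacian in place of its partial-consensus matrix), here is a standard distributed gradient flow integrated by forward Euler; the integral state z is what restores exact consensus at the optimum.

import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 2
c = rng.standard_normal((n, d))            # local objectives f_i(x) = 0.5*||x - c_i||^2
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)  # connected undirected communication graph
L = np.diag(A.sum(axis=1)) - A             # graph Laplacian
x = rng.standard_normal((n, d))            # agent states
z = np.zeros((n, d))                       # integral states enforcing exact consensus
h = 0.05                                   # forward-Euler step for the ODE flow
for _ in range(4000):
    # flow: x_dot = -grad f_i(x_i) - (L x)_i - (L z)_i,  z_dot = (L x)_i
    # tuple assignment evaluates both updates at the same (old) x
    x, z = x - h * ((x - c) + L @ x + L @ z), z + h * (L @ x)
print(x)   # rows (approximately) agree and match the global minimizer c.mean(axis=0)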
Dual decomposition is widely utilized in distributed optimization of multi-agent systems. In practice, the dual decomposition algorithm should admit an asynchronous implementation due to imperfect communication, such as time delays and packet drops. In addition, computational errors arise when individual agents solve their own subproblems. In this paper, we analyze the convergence of the dual decomposition algorithm in distributed optimization when both asynchrony in communication and inexactness in solving subproblems are present. We find that the interaction between asynchrony and inexactness slows the convergence rate from $\mathcal{O}(1/k)$ to $\mathcal{O}(1/\sqrt{k})$. Specifically, with a constant step size, the value of the objective function converges to a neighborhood of the optimal value, and the solution converges to a neighborhood of the exact optimal solution. Moreover, the violation of the constraints diminishes at rate $\mathcal{O}(1/\sqrt{k})$. Our result generalizes and unifies existing ones that consider either asynchrony or inexactness alone. Finally, numerical simulations validate the theoretical results.
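A toy simulation of the mechanism being analyzed (the quadratic problem, delay model, and error schedule below are our own choices, not the paper's setting): each agent's subproblem has a closed-form solution, which is perturbed to model inexactness, while dual reads are delayed to model asynchrony.

import numpy as np

# toy problem: min sum_i 0.5*||x_i - c_i||^2  s.t.  sum_i x_i = b;
# the subproblem argmin f_i(x_i) + lam^T x_i has closed form x_i = c_i - lam
rng = np.random.default_rng(0)
m, d = 5, 3
c = rng.standard_normal((m, d))
b = np.ones(d)
lam_hist = [np.zeros(d)]                   # dual iterate history for stale reads
alpha, max_delay = 0.1, 3                  # constant step size, worst-case delay

for k in range(200):
    x = np.empty((m, d))
    for i in range(m):
        delay = rng.integers(1, min(max_delay, len(lam_hist)) + 1)
        lam_stale = lam_hist[-delay]                   # asynchronous (delayed) read
        eps = rng.standard_normal(d) / np.sqrt(k + 1)  # diminishing subproblem error
        x[i] = c[i] - lam_stale + eps                  # inexact subproblem solution
    lam_hist.append(lam_hist[-1] + alpha * (x.sum(axis=0) - b))  # dual ascent step

print("constraint violation:", np.linalg.norm(x.sum(axis=0) - b))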
Kai Du, Qingxin Meng, 2020
This paper studies an infinite-horizon optimal control problem for discrete-time linear systems and quadratic criteria, both with random parameters that are independent and identically distributed with respect to time. A classical approach is to solve an algebraic Riccati equation that involves mathematical expectations and requires certain statistical information about the parameters. In this paper, we propose an online iterative algorithm in the spirit of Q-learning for the situation where only one random sample of the parameters emerges at each time step. The first theorem proves the equivalence of three properties: the convergence of the learning sequence, the well-posedness of the control problem, and the solvability of the algebraic Riccati equation. The second theorem shows that the adaptive feedback control in terms of the learning sequence stabilizes the system as long as the control problem is well-posed. Numerical examples are presented to illustrate our results.
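A hedged one-dimensional sketch of the sample-based idea (our simplification in the spirit of Q-learning, not the paper's exact algorithm): the Q-function coefficients are averaged over one random parameter sample per step via stochastic approximation, in place of the expectations appearing in the classical Riccati equation.

import numpy as np

# scalar stochastic LQ: x_{k+1} = a_k x_k + b_k u_k, stage cost q*x^2 + r*u^2,
# with (a_k, b_k) i.i.d. over time; only one sample is observed per step
rng = np.random.default_rng(0)
q, r = 1.0, 1.0
H = np.eye(2)        # Q(x, u) = [x u] H [x u]^T, so the optimal u = -(H_xu / H_uu) x
for k in range(1, 20000):
    a = 1.0 + 0.1 * rng.standard_normal()   # one random sample of the parameters
    b = 1.0 + 0.1 * rng.standard_normal()
    P = H[0, 0] - H[0, 1] ** 2 / H[1, 1]    # value coefficient induced by H
    target = np.array([[q + a * a * P, a * b * P],       # one-sample Bellman
                       [a * b * P,     r + b * b * P]])  # target for H
    H += (target - H) / k                   # stochastic-approximation averaging
K = H[0, 1] / H[1, 1]
print("learned feedback gain u = -K x with K =", K)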
We establish that the min-sum message-passing algorithm and its asynchronous variants converge for a large class of unconstrained convex optimization problems.
Mini-batch optimization has proven to be a powerful paradigm for large-scale learning. However, state-of-the-art parallel mini-batch algorithms assume synchronous operation or cyclic update orders. When worker nodes are heterogeneous (due to different computational capabilities or different communication delays), synchronous and cyclic operations are inefficient, since they leave workers idle waiting for the slower nodes to complete their computations. In this paper, we propose an asynchronous mini-batch algorithm for regularized stochastic optimization problems with smooth loss functions that eliminates idle waiting and allows workers to run at their maximal update rates. We show that by suitably choosing the step-size values, the algorithm achieves a rate of the order $O(1/\sqrt{T})$ for general convex regularization functions, and the rate $O(1/T)$ for strongly convex regularization functions, where $T$ is the number of iterations. In both cases, the impact of asynchrony on the convergence rate of our algorithm is asymptotically negligible, and a near-linear speedup in the number of workers can be expected. The theoretical results are confirmed in real implementations on a distributed computing infrastructure.
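A small single-process simulation of the delayed-gradient mechanism (a toy regularized least-squares problem of our choosing, not the paper's implementation; we also fold the smooth ℓ2 regularizer into the gradient rather than using a prox step): worker gradients are evaluated at stale iterates, and the step size scales like $1/\sqrt{T}$ as in the general convex case.

import numpy as np

rng = np.random.default_rng(0)
d, T, mu, max_delay = 10, 5000, 0.1, 4      # dimension, iterations, regularizer, delay bound
x_star = rng.standard_normal(d)
hist = [np.zeros(d)]                        # iterate history; stale reads model asynchrony

def minibatch_grad(x, batch=32):
    # mini-batch gradient of 0.5*E[(a^T x - y)^2] + (mu/2)*||x||^2
    A = rng.standard_normal((batch, d))
    y = A @ x_star + 0.1 * rng.standard_normal(batch)
    return A.T @ (A @ x - y) / batch + mu * x

eta = 1.0 / np.sqrt(T)                      # step size from the general convex rate
for _ in range(T):
    delay = rng.integers(1, min(max_delay, len(hist)) + 1)
    g = minibatch_grad(hist[-delay])        # worker gradient at a stale iterate
    hist.append(hist[-1] - eta * g)         # server applies it without idle waiting

print("distance to x_star:", np.linalg.norm(hist[-1] - x_star))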