We analyze the performance of the alternating direction method of multipliers (ADMM) for tracking, in a decentralized manner, a solution of a stochastic sequence of optimization problems parametrized by a discrete-time Markov process. The main advantage of considering a stochastic model is that it allows the objective functions to occasionally lose strong convexity and/or Lipschitz continuity of their gradients. Due to the stochastic nature of the model, the tracking statement is given in terms of a mean squared deviation error.
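As a concrete illustration of the setting, here is a minimal decentralized consensus ADMM sketch; the scalar quadratic local objectives f_i(x) = ½(x − a_i(t))², the fixed ring graph, and the slow random drift of a_i(t) are all illustrative assumptions, not the paper's exact model or algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6                                    # number of nodes (assumption)
neighbors = [((i - 1) % n, (i + 1) % n) for i in range(n)]  # ring graph
rho = 1.0                                # ADMM penalty parameter

x = np.zeros(n)                          # primal iterate held by each node
p = np.zeros(n)                          # accumulated dual variable per node
a = rng.normal(size=n)                   # local data a_i(t), drifts over time

for t in range(200):
    a += 0.01 * rng.normal(size=n)       # slowly varying local objectives
    x_old = x.copy()
    for i in range(n):
        d_i = len(neighbors[i])
        m = sum((x_old[i] + x_old[j]) / 2 for j in neighbors[i])
        # closed-form minimizer of 0.5*(x-a_i)^2 + p_i*x + rho*sum_j (x - m_ij)^2
        x[i] = (a[i] - p[i] + 2 * rho * m) / (1 + 2 * rho * d_i)
    for i in range(n):                   # dual ascent on the edge disagreements
        p[i] += rho * sum(x[i] - x[j] for j in neighbors[i])
    if t % 50 == 0:
        msd = np.mean((x - a.mean()) ** 2)   # mean squared deviation from optimum
        print(f"t={t:3d}  MSD={msd:.3e}")
```

Since the global minimizer of Σᵢ ½(x − aᵢ)² is the network average of the aᵢ, the printed mean squared deviation directly plays the role of the tracking error discussed in the abstract.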
Decentralized optimization over time-varying graphs has become increasingly common in modern machine learning with massive data stored on millions of mobile devices, such as in federated learning. This paper revisits the widely used accelerated gradient …
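For context, below is a minimal sketch of decentralized gradient descent with Nesterov-style momentum over a fixed mixing matrix; the ring topology, Metropolis weights, quadratic objectives, and step/momentum values are assumptions for illustration, and with a constant step size the method only reaches an O(α) neighborhood of the optimum.

```python
import numpy as np

rng = np.random.default_rng(1)
n, alpha, beta = 5, 0.02, 0.6            # nodes, step size, momentum (assumptions)
a = rng.normal(size=n)                   # local data: f_i(x) = 0.5*(x - a_i)^2

# Metropolis weights on a ring give a symmetric, doubly stochastic mixing matrix
W = np.zeros((n, n))
for i in range(n):
    for j in ((i - 1) % n, (i + 1) % n):
        W[i, j] = 1 / 3
    W[i, i] = 1 - W[i].sum()

x = np.zeros(n)
x_prev = np.zeros(n)
for k in range(500):
    y = x + beta * (x - x_prev)          # Nesterov-style extrapolation
    grad = y - a                         # local gradient of the quadratic model
    x_prev, x = x, W @ y - alpha * grad  # mix with neighbors, then descend
# constant steps leave an O(alpha) bias; diminishing steps would remove it
print("consensus error:", np.max(np.abs(x - a.mean())))
```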
Communication compression techniques are of growing interest for solving the decentralized optimization problem under limited communication, where the global objective is to minimize the average of local cost functions over a multi-agent network …
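A sketch of compressed gossip in the spirit of CHOCO-Gossip (Koloskova et al., 2019), where nodes transmit only top-k sparsified innovations and maintain replicas that absorb the compression error; the graph, compressor, and step size below are illustrative assumptions, not necessarily the scheme in the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, k, gamma = 4, 8, 4, 0.2            # nodes, dimension, sparsity, step (assumptions)
x = rng.normal(size=(n, d))              # local vectors; goal: agree on their average
target = x.mean(axis=0)

W = np.zeros((n, n))                     # Metropolis weights on a 4-node ring
for i in range(n):
    for j in ((i - 1) % n, (i + 1) % n):
        W[i, j] = 1 / 3
    W[i, i] = 1 / 3

def top_k(v):
    """Sparsifying compressor: transmit only the k largest-magnitude entries."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

x_hat = np.zeros((n, d))                 # replicas driven only by compressed messages
for t in range(400):
    q = np.array([top_k(x[i] - x_hat[i]) for i in range(n)])  # compressed innovation
    x_hat += q                           # every node applies the same cheap update
    x += gamma * (W @ x_hat - x_hat)     # gossip step uses replicas, not raw vectors
print("max deviation from true average:", np.max(np.abs(x - target)))
```

Because W is doubly stochastic, the gossip step preserves the network average exactly, so only consensus (not the target itself) is affected by compression.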
The present paper considers leveraging network topology information to improve the convergence rate of ADMM for decentralized optimization, where networked nodes work collaboratively to minimize the objective. Such problems can be solved efficiently …
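To illustrate what topology information buys, the sketch below compares the Laplacian spectra of a ring with and without chord edges; the penalty heuristic ρ ≈ 1/√(λ₂ λ_max) echoes guidelines from the literature on ADMM parameter selection for quadratic consensus problems and is shown only as an example of spectrum-aware tuning, not as this paper's method.

```python
import numpy as np

def laplacian(edges, n):
    """Graph Laplacian L = D - A for an undirected edge list."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return L

n = 8
ring = [(i, (i + 1) % n) for i in range(n)]
ring_plus = ring + [(i, (i + n // 2) % n) for i in range(n // 2)]  # add chords

for name, edges in [("ring", ring), ("ring + chords", ring_plus)]:
    lam = np.sort(np.linalg.eigvalsh(laplacian(edges, n)))
    lam2, lam_max = lam[1], lam[-1]
    # larger spectral gap -> better-connected graph -> faster consensus;
    # rho ~ 1/sqrt(lam2 * lam_max) is one topology-aware tuning heuristic
    print(f"{name:14s} λ2={lam2:.3f}  κ=λmax/λ2={lam_max/lam2:.2f}  "
          f"ρ_heuristic={1 / np.sqrt(lam2 * lam_max):.3f}")
```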
There is increasing interest in designing differentiators that converge exactly before a prespecified time regardless of the initial conditions, i.e., that are fixed-time convergent with a predefined Upper Bound of their Settling Time (UBST) …
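For background, here is a sketch of the classical super-twisting (robust exact) differentiator, which converges in finite but initial-condition-dependent time; predefined-UBST designs modify such dynamics so the settling time is bounded a priori. The gains, the Lipschitz bound L, the test signal, and the Euler discretization are assumptions for illustration.

```python
import numpy as np

dt, T = 1e-4, 2.0
t = np.arange(0, T, dt)
f = np.sin(3 * t)                        # test signal; true derivative is 3*cos(3t)
L = 10.0                                 # bound on |f''| (here |f''| <= 9)
l1, l2 = 1.5, 1.1                        # standard super-twisting gains

z0, z1 = 0.0, 0.0                        # estimates of f and f'
est = np.empty_like(t)
for i, fk in enumerate(f):
    e = z0 - fk
    z0 += dt * (z1 - l1 * np.sqrt(L) * np.sqrt(abs(e)) * np.sign(e))
    z1 += dt * (-l2 * L * np.sign(e))    # discontinuous correction term
    est[i] = z1

half = len(t) // 2                       # skip the finite-time transient
err = np.abs(est[half:] - 3 * np.cos(3 * t[half:])).max()
print(f"max derivative error after transient: {err:.3e}")
```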
The CASH (combined algorithm selection and hyperparameter optimization) problem has been widely studied in the context of automated configuration of machine learning (ML) pipelines, and various solvers and toolkits are available. However, CASH solvers do not directly handle black-box constraints such as fairness …
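A toy sketch of constraint-aware CASH via random search over a two-algorithm space, treating a demographic-parity gap as the black-box fairness constraint; the synthetic data, sensitive attribute, search space, metric, and threshold tau are all illustrative assumptions, not a method from the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X, y = make_classification(n_samples=600, n_features=8, random_state=0)
s = rng.integers(0, 2, size=len(y))      # synthetic sensitive attribute (assumption)
Xtr, Xte, ytr, yte, s_tr, s_te = train_test_split(X, y, s, random_state=0)

space = [  # joint algorithm + hyperparameter space: the CASH search space
    ("logreg", lambda r: LogisticRegression(C=10 ** r.uniform(-3, 2), max_iter=500)),
    ("forest", lambda r: RandomForestClassifier(n_estimators=int(r.integers(10, 200)))),
]

def dp_gap(pred, s):
    """Demographic-parity gap |P(y_hat=1 | s=0) - P(y_hat=1 | s=1)|: a black-box constraint."""
    return abs(pred[s == 0].mean() - pred[s == 1].mean())

best, tau = None, 0.10                   # constraint threshold (assumption)
for _ in range(30):                      # random search as a stand-in CASH solver
    name, make = space[rng.integers(len(space))]
    model = make(rng).fit(Xtr, ytr)
    pred = model.predict(Xte)
    acc, gap = (pred == yte).mean(), dp_gap(pred, s_te)
    if gap <= tau and (best is None or acc > best[1]):  # feasible-first selection
        best = (name, acc, gap)
print("best feasible config:", best)
```

The feasible-first selection rule simply rejects configurations that violate the constraint; more sophisticated approaches treat the constraint probabilistically inside the surrogate model of a Bayesian-optimization-based CASH solver.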