In this paper, we consider the problem of distributed online convex optimization, where a group of agents collaborate to track the global minimizers of a sum of time-varying objective functions in an online manner. Specifically, we propose a novel distributed online gradient descent algorithm that relies on an online adaptation of the gradient tracking technique used in static optimization. We show that the dynamic regret bound of this algorithm has no explicit dependence on the time horizon and can therefore be tighter than existing bounds, especially for problems with long horizons. Our bound depends on a new regularity measure that quantifies the total change in the gradients at the optimal points at each time instant. Furthermore, when the optimizer is approximately subject to linear dynamics, we show that the dynamic regret bound can be further tightened by replacing the regularity measure that captures the path length of the optimizer with the accumulated prediction errors, which can be much lower in this special case. We present numerical experiments to corroborate our theoretical results.
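To illustrate the kind of update the abstract refers to, the following is a minimal sketch of a distributed online gradient descent with gradient tracking; the mixing matrix W, the constant step size alpha, and the exact form of the update are assumptions for illustration and may differ from the algorithm proposed in the paper.

```python
import numpy as np

def online_gradient_tracking(grads, W, x0, alpha, T):
    """Sketch of distributed online gradient descent with gradient tracking.

    grads : callable grads(t, i, x) -> gradient of agent i's loss at round t, point x
    W     : (n, n) doubly stochastic mixing matrix of the agent network (assumed)
    x0    : (n, d) initial decisions, one row per agent
    alpha : constant step size (assumed)
    T     : number of rounds
    """
    n, d = x0.shape
    x = x0.copy()
    # each agent maintains a tracker y_i estimating the average gradient
    g_prev = np.stack([grads(0, i, x[i]) for i in range(n)])
    y = g_prev.copy()
    history = [x.copy()]
    for t in range(T):
        # consensus on decisions, then a descent step along the tracked gradient
        x = W @ x - alpha * y
        # evaluate the new time-varying local gradients at the updated decisions
        g_new = np.stack([grads(t + 1, i, x[i]) for i in range(n)])
        # consensus on trackers plus the local gradient innovation
        y = W @ y + g_new - g_prev
        g_prev = g_new
        history.append(x.copy())
    return history
```

For instance, with time-varying quadratic losses f_{i,t}(x) = ||x - b_{i,t}||^2, the callable grads(t, i, x) would return 2 * (x - b_{i,t}), and the agents' iterates track the moving average of the targets b_{i,t}.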