Distributed descent-based methods are an essential toolset for solving optimization problems in multi-agent systems, where the agents seek to optimize a global objective function through mutual cooperation. Oftentimes, cooperation is achieved over a wireless communication network that is prone to delays and errors. There are many scenarios wherein the objective function is either non-differentiable or merely observable. In this paper, we present a cross-entropy based distributed stochastic approximation (SA) algorithm that finds a minimum of the objective using only samples. We call this algorithm Decentralized Simultaneous Perturbation Stochastic Gradient, with Constant Sensitivity Parameters (DSPG). This algorithm is a twofold improvement over the classic Simultaneous Perturbation Stochastic Approximation (SPSA) algorithm. Specifically, DSPG allows for (i) the use of old information from other agents and (ii) easy implementation through the use of simple hyper-parameter choices. We analyze the biases and variances that arise due to these two allowances. We show that the biases due to communication delays can be countered by a careful choice of algorithm hyper-parameters. The variance of the gradient estimator and its effect on the rate of convergence is studied. We present numerical results supporting our theory. Finally, we discuss an application to the stochastic consensus problem.
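For concreteness, the classical SPSA estimator that this line of work builds on can be sketched as below. This is a minimal sketch of the standard two-measurement SPSA gradient estimate with gain sequences typical of the SPSA literature, not the authors' DSPG implementation; the test function `noisy_quadratic` and all constants are illustrative assumptions.

```python
import numpy as np

def spsa_gradient(f, x, c=1e-2, rng=None):
    """Two-measurement SPSA gradient estimate of f at x.

    All coordinates are perturbed simultaneously with a random Rademacher
    (+/-1) direction, so only two evaluations of f are needed regardless
    of the dimension of x.
    """
    rng = np.random.default_rng() if rng is None else rng
    delta = rng.choice([-1.0, 1.0], size=x.shape)    # simultaneous perturbation
    f_plus = f(x + c * delta)
    f_minus = f(x - c * delta)
    return (f_plus - f_minus) / (2.0 * c * delta)    # elementwise: delta_i in {-1, +1}

# Minimal usage: noisy quadratic objective, plain SA iteration with decaying gains.
def noisy_quadratic(x, rng=np.random.default_rng(0)):
    return float(np.sum(x**2) + 0.01 * rng.normal())

x = np.ones(5)
for n in range(1, 2001):
    x -= (0.1 / n**0.602) * spsa_gradient(noisy_quadratic, x, c=0.05 / n**0.101)
```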
Despite the strong theoretical guarantees that variance-reduced finite-sum optimization algorithms enjoy, their applicability remains limited to cases where the memory overhead they introduce (SAG/SAGA), or the periodic full gradient computation they
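To make the trade-off mentioned here concrete, the following is a minimal SVRG-style loop: it avoids the per-sample gradient table of SAG/SAGA at the price of one full-gradient pass per epoch. The function `grad_i` and all constants are illustrative assumptions, not taken from the truncated abstract.

```python
import numpy as np

def svrg(grad_i, x0, n, step, epochs=20, inner=None, rng=None):
    """Minimal SVRG-style loop for minimizing (1/n) * sum_i f_i(x).

    grad_i(i, x) returns the gradient of the i-th summand. Each epoch pays one
    full-gradient pass (the cost flagged in the abstract), but no per-sample
    gradient storage is needed, unlike SAG/SAGA.
    """
    rng = np.random.default_rng() if rng is None else rng
    inner = n if inner is None else inner
    x = x0.copy()
    for _ in range(epochs):
        snapshot = x.copy()
        full_grad = sum(grad_i(i, snapshot) for i in range(n)) / n  # periodic full gradient
        for _ in range(inner):
            i = rng.integers(n)
            # variance-reduced direction: unbiased, with variance shrinking near the optimum
            g = grad_i(i, x) - grad_i(i, snapshot) + full_grad
            x = x - step * g
    return x
```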
We present a strikingly simple proof that two rules are sufficient to automate gradient descent: 1) don't increase the stepsize too fast and 2) don't overstep the local curvature. No need for functional values, no line search, no information about the
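One plausible instantiation of these two rules is sketched below. The specific growth factor and curvature cap are assumptions drawn from the adaptive-stepsize literature rather than from this (truncated) abstract, and the constants are illustrative.

```python
import numpy as np

def adaptive_gd(grad, x0, lam0=1e-6, steps=1000):
    """Gradient descent with a stepsize governed by the two rules above:
    cap how fast the stepsize may grow, and keep it below an estimate of the
    inverse local curvature built from successive gradients."""
    x_prev, g_prev = x0.copy(), grad(x0)
    lam_prev, theta = lam0, 1e9
    x = x_prev - lam_prev * g_prev
    for _ in range(steps):
        g = grad(x)
        grad_diff = np.linalg.norm(g - g_prev)
        # rule 2: don't overstep the local curvature
        curv_cap = np.linalg.norm(x - x_prev) / (2.0 * grad_diff) if grad_diff > 0 else np.inf
        # rule 1: don't increase the stepsize too fast
        growth_cap = np.sqrt(1.0 + theta) * lam_prev
        lam = min(growth_cap, curv_cap)
        theta = lam / lam_prev
        x_prev, g_prev, lam_prev = x, g, lam
        x = x - lam * g
    return x
```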
In unconstrained optimisation on a Euclidean space, to prove convergence of Gradient Descent (GD) processes $x_{n+1}=x_n-\delta_n \nabla f(x_n)$, it is usually required that the learning rates $\delta_n$ are bounded: $\delta_n \leq \delta$ for some pos
Conditional Stochastic Optimization (CSO) covers a variety of applications ranging from meta-learning and causal inference to invariant learning. However, constructing unbiased gradient estimates in CSO is challenging due to the composition structure
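A toy illustration of why the composition structure causes bias: plugging an inner mini-batch mean into a nonlinear outer function yields a gradient estimate whose bias only shrinks as the inner batch grows. The choices f(u) = u**3 and g(x, eta) = x + eta below are made-up examples, not from the paper.

```python
import numpy as np

def naive_cso_grad(x, m, rng):
    """Plug-in gradient of f(E[g(x, eta)]) with f(u) = u**3 and g(x, eta) = x + eta.

    The inner expectation is replaced by an m-sample average before applying the
    outer nonlinearity, which is exactly the composition structure that makes
    unbiased gradient estimation hard in CSO.
    """
    eta_bar = rng.normal(size=m).mean()    # inner mini-batch mean, eta ~ N(0, 1)
    return 3.0 * (x + eta_bar) ** 2        # chain rule through the plug-in estimate

rng = np.random.default_rng(0)
x = 1.0
true_grad = 3.0 * x ** 2                   # F(x) = x**3, so F'(x) = 3 x**2
for m in (1, 10, 100):
    est = np.mean([naive_cso_grad(x, m, rng) for _ in range(100_000)])
    print(f"m={m:4d}  estimate={est:.4f}  bias={est - true_grad:+.4f}  (theory: 3/m = {3.0/m:.4f})")
```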
The epoch gradient descent method (a.k.a. Epoch-GD) proposed by Hazan and Kale (2011) was deemed a breakthrough for stochastic strongly convex minimization, as it achieves the optimal convergence rate of $O(1/T)$ with $T$ iterative updates for the {\it ob
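A rough sketch of the epoch structure described here: SGD restarted from the epoch average, with epoch lengths doubling and stepsizes halving. The constants are illustrative placeholders, not the tuned values from Hazan and Kale (2011).

```python
import numpy as np

def epoch_gd(stoch_grad, x0, mu, T, rng=None):
    """Epoch-style SGD sketch for a mu-strongly convex objective.

    Runs SGD in epochs whose length doubles while the stepsize halves, and
    restarts each epoch from the average of the previous epoch's iterates.
    """
    x = x0.copy()
    epoch_len, step, used = 4, 1.0 / mu, 0
    while used < T:
        iterates = []
        for _ in range(min(epoch_len, T - used)):
            x = x - step * stoch_grad(x)
            iterates.append(x)
            used += 1
        x = np.mean(iterates, axis=0)   # epoch average becomes the next starting point
        epoch_len *= 2                  # double the epoch length ...
        step /= 2.0                     # ... and halve the stepsize
    return x
```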