A stochastic incremental subgradient algorithm for the minimization of a sum of convex functions is introduced. The method sequentially uses partial subgradient information, and the sequence of partial subgradients is determined by a general Markov chain. This makes it suitable for use in networks where the path of information flow is selected stochastically. We prove convergence of the algorithm to a weighted objective function, where the weights are given by the Cesàro limiting probability distribution of the Markov chain. Unlike previous work in the literature, the Cesàro limiting distribution is general (not necessarily uniform), allowing for general weighted objective functions and flexibility in the method.
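A minimal sketch of such a Markov-chain-driven incremental subgradient method, under illustrative assumptions (the component functions f_i(x) = |x - a_i|, the transition matrix P, and the 1/k step size are hypothetical choices, not taken from the paper):

```python
import numpy as np

# Hypothetical instance: f_i(x) = |x - a_i| (convex, subdifferentiable).
# The component used at step k follows a Markov chain with transition
# matrix P; the iterate drifts toward a minimizer of sum_i pi_i * f_i,
# where pi is the Cesàro limiting distribution of P.
a = np.array([-1.0, 0.0, 2.0])           # anchors of the component functions
P = np.array([[0.5, 0.5, 0.0],           # example transition matrix
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])

def subgrad(i, x):
    """A subgradient of f_i(x) = |x - a_i|."""
    return np.sign(x - a[i])

rng = np.random.default_rng(0)
x, i = 0.0, 0
for k in range(1, 100_001):
    x -= (1.0 / k) * subgrad(i, x)       # diminishing step size alpha_k = 1/k
    i = rng.choice(3, p=P[i])            # next component drawn by the chain
print(x)  # approximates a weighted median of the a_i under pi
```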
In this paper, we revisit a well-known distributed projected subgradient algorithm that aims to minimize a sum of cost functions subject to a common set constraint. In contrast to most existing results, the weight matrices of the time-varying multi-agent network …
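The abstract refers to the standard distributed projected subgradient iteration: each agent mixes its neighbors' iterates through a weight matrix, takes a local subgradient step, and projects onto the common constraint set. A toy sketch under assumed data (the quadratic local costs, the row-stochastic weight matrix W, and the constraint set X = [0, 1] are all illustrative):

```python
import numpy as np

# Toy setup: n agents minimize sum_i f_i(x) with f_i(x) = 0.5 * (x - c_i)^2,
# subject to x in X = [0, 1]. W is an illustrative row-stochastic weight
# matrix; the abstract's point is that stronger conditions (e.g. double
# stochasticity) imposed in most prior results may be relaxed.
n = 3
c = np.array([0.2, 0.6, 1.5])
W = np.array([[0.6, 0.4, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.5, 0.5]])

def proj(x):                      # projection onto X = [0, 1]
    return np.clip(x, 0.0, 1.0)

x = np.zeros(n)
for k in range(1, 10_001):
    mixed = W @ x                 # consensus step: mix neighbors' iterates
    grads = mixed - c             # gradient of each local quadratic
    x = proj(mixed - (1.0 / k) * grads)   # subgradient step, then project
print(x)  # agents approach a common constrained minimizer
```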
We construct examples of Lipschitz continuous functions with pathological subgradient dynamics, in both continuous and discrete time. In both settings, the iterates generate bounded trajectories, yet fail to detect any (generalized) critical points of the function.
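For reference, the discrete-time dynamics in question are the subgradient iterates x_{k+1} = x_k - t_k v_k with v_k ∈ ∂f(x_k) and diminishing steps. The sketch below runs them on a benign nonsmooth function, where they do reach a critical point; the paper's contribution is constructing Lipschitz functions for which bounded trajectories of this same scheme avoid every generalized critical point:

```python
import numpy as np

# Discrete-time subgradient dynamics: z_{k+1} = z_k - t_k * v_k, with v_k a
# (Clarke) subgradient at z_k and diminishing, non-summable steps t_k = 1/k.
# Benign example f(x, y) = |x| + y**2: iterates approach the critical
# point (0, 0). The paper constructs Lipschitz functions where trajectories
# of this scheme stay bounded yet never approach any critical point.
def f_subgrad(z):
    x, y = z
    return np.array([np.sign(x), 2.0 * y])   # one Clarke subgradient

z = np.array([1.0, 1.0])
for k in range(1, 10_001):
    z = z - (1.0 / k) * f_subgrad(z)
print(z)  # close to the critical point (0, 0)
```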
We establish a convergence theorem for a certain type of stochastic gradient descent, which leads to a convergent variant of the back-propagation algorithm.
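A minimal sketch of the kind of stochastic gradient descent such convergence theorems cover: a least-squares model fitted one random sample at a time with Robbins-Monro step sizes (the data, model, and 1/k schedule are illustrative assumptions, not the paper's setting):

```python
import numpy as np

# Stochastic gradient descent on 0.5 * ||A w - b||^2 / m, sampling one row
# per step, with diminishing steps a_k = 1/k (sum a_k = inf, sum a_k^2 < inf).
rng = np.random.default_rng(1)
A = rng.normal(size=(100, 5))
w_true = rng.normal(size=5)
b = A @ w_true

w = np.zeros(5)
for k in range(1, 50_001):
    i = rng.integers(100)                    # sample one data point
    grad = (A[i] @ w - b[i]) * A[i]          # stochastic gradient of 0.5*(A_i w - b_i)^2
    w -= (1.0 / k) * grad
print(np.linalg.norm(w - w_true))            # the error shrinks as k grows
```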
This paper proposes a new algorithm -- the Single-timescale Double-momentum Stochastic Approximation (SUSTAIN) -- for tackling stochastic unconstrained bilevel optimization problems. …
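The snippet does not spell out the SUSTAIN updates, so the following is only a generic illustration of a single-timescale, double-momentum scheme on a hypothetical toy bilevel problem: both the upper- and lower-level variables are updated in every iteration, each driven by a momentum-averaged stochastic gradient estimate:

```python
import numpy as np

# Toy bilevel instance (all choices here are illustrative, not the paper's):
#   lower: g(x, y) = 0.5*(y - x)^2  =>  y*(x) = x
#   upper: f(x, y) = 0.5*(y - 1)^2  =>  solution x = y = 1
rng = np.random.default_rng(2)
x, y = 0.0, 0.0
hx, hy = 0.0, 0.0                 # momentum (moving-average) gradient trackers
beta, step = 0.9, 0.05
for k in range(5000):
    gy = (y - x) + 0.01 * rng.normal()      # noisy lower-level gradient
    gx = (y - 1.0) + 0.01 * rng.normal()    # noisy hypergradient estimate
    hy = beta * hy + (1 - beta) * gy        # momentum on lower-level direction
    hx = beta * hx + (1 - beta) * gx        # momentum on upper-level direction
    y -= step * hy                          # both variables updated every
    x -= step * hx                          # iteration: a single timescale
print(x, y)  # both approach 1, the bilevel solution
```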
We use the technique of information relaxation to develop a duality-driven iterative approach to obtaining and improving confidence interval estimates for the true value of finite-horizon stochastic dynamic programming problems. We show that the sequence …
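A minimal sketch of the starting point of such a duality-driven approach: for a toy optimal stopping problem, the zero-penalty (perfect-information) relaxation yields a Monte Carlo upper bound, a feasible heuristic policy yields a lower bound, and the two interval estimates bracket the true dynamic programming value (the random-walk model and the threshold rule are hypothetical; the paper's iterations would then tighten these bounds via penalties):

```python
import numpy as np

# Stop a Gaussian random walk at one of T times and collect its value.
# A clairvoyant controller seeing the whole path attains max_t S_t, so
# E[max_t S_t] upper-bounds the true value; a naive threshold policy is
# feasible, so its expected payoff is a lower bound.
rng = np.random.default_rng(3)
T, n_paths = 10, 100_000
paths = np.cumsum(rng.normal(size=(n_paths, T)), axis=1)

upper = paths.max(axis=1)                      # clairvoyant payoff per path
threshold = 1.5                                # hypothetical stopping threshold
hit = paths >= threshold
first = np.where(hit.any(axis=1), hit.argmax(axis=1), T - 1)
lower = paths[np.arange(n_paths), first]       # payoff of the feasible policy

for name, v in (("lower", lower), ("upper", upper)):
    se = v.std(ddof=1) / np.sqrt(n_paths)
    print(f"{name}: {v.mean():.3f} +/- {1.96 * se:.3f}")   # 95% CIs
```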