
A Markovian Incremental Stochastic Subgradient Algorithm

Added by Rafael Massambone
Publication date: 2021
Language: English





A stochastic incremental subgradient algorithm for minimizing a sum of convex functions is introduced. The method sequentially uses partial subgradient information, and the sequence of partial subgradients is determined by a general Markov chain. This makes the method suitable for networks in which the path of information flow is stochastically selected. We prove convergence of the algorithm to a weighted objective function, where the weights are given by the Cesàro limiting probability distribution of the Markov chain. Unlike previous works in the literature, the Cesàro limiting distribution is general (not necessarily uniform), allowing for general weighted objective functions and flexibility in the method.
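The update itself is simple to sketch. The following is a minimal illustration of such a scheme, not the paper's exact method: it assumes a subgradient oracle for each component function, a row-stochastic transition matrix for the Markov chain, a 1/sqrt(k) step size, and projection onto a Euclidean ball; all of these specifics are assumptions made here for the example.

    import numpy as np

    # Illustrative sketch (not the paper's exact method): minimize
    # f(x) = sum_i f_i(x) using one partial subgradient per step, with the
    # component index driven by a Markov chain with transition matrix P.
    def markovian_incremental_subgradient(subgrads, P, x0, steps, radius=10.0, seed=0):
        rng = np.random.default_rng(seed)
        m = len(subgrads)
        x = np.asarray(x0, dtype=float)
        i = rng.integers(m)                  # initial state of the chain
        for k in range(1, steps + 1):
            alpha = 1.0 / np.sqrt(k)         # diminishing step size (assumed)
            x = x - alpha * subgrads[i](x)   # step along one partial subgradient
            norm = np.linalg.norm(x)
            if norm > radius:                # project back onto the ball ||x|| <= radius
                x *= radius / norm
            i = rng.choice(m, p=P[i])        # next component drawn by the chain
        return x

    # Toy example: f_0(x) = |x|, f_1(x) = |x - 1| with a non-uniform chain.
    subgrads = [lambda x, a=float(a): np.sign(x - a) for a in (0, 1)]
    P = np.array([[0.9, 0.1],                # Cesàro (here stationary) distribution
                  [0.5, 0.5]])               # is (5/6, 1/6), so f_0 dominates
    print(markovian_incremental_subgradient(subgrads, P, x0=np.array([3.0]), steps=20000))

Consistent with the weighted-objective result, the iterates in this toy run settle near x = 0, the minimizer of (5/6)|x| + (1/6)|x - 1|, rather than near the minimizer of the unweighted sum.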



Related research

In this paper, we revisit a well-known distributed projected subgradient algorithm that aims to minimize a sum of cost functions with a common set constraint. In contrast to most existing results, the weight matrices of the time-varying multi-agent network are assumed to be more general: they are only required to be row stochastic rather than doubly stochastic. We focus on analyzing the convergence properties of this algorithm under general graphs. We first show that there generally exists a graph sequence such that the algorithm is not convergent when the network switches freely within finitely many general graphs. Then, to guarantee the convergence of this algorithm under any uniformly jointly strongly connected general graph sequence, we provide a necessary and sufficient condition: the intersection of the optimal solution sets of all local optimization problems is nonempty. Furthermore, we find, surprisingly, that the algorithm is convergent for any periodically switching general graph sequence, and the converged solution minimizes a weighted sum of local cost functions, where the weights depend on the Perron vectors of certain product matrices of the underlying periodically switching graphs. Finally, we consider a slightly broader class of quasi-periodically switching graph sequences and show that the algorithm is convergent for any quasi-periodic graph sequence if and only if the network switches between only two graphs.
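For reference, the iteration in question combines a consensus step with a local projected subgradient step. The sketch below is a generic version under assumed details (a fixed weight matrix, a 1/k step size, and a Euclidean-ball constraint set); the paper's analysis concerns time-varying, merely row-stochastic weights, which this sketch does not capture.

    import numpy as np

    # Generic distributed projected subgradient sketch. W is row stochastic
    # (each row sums to 1); subgrads[i](x) returns a subgradient of agent i's
    # cost; the common constraint set is taken to be a Euclidean ball.
    def distributed_projected_subgradient(W, subgrads, X0, steps, radius=10.0):
        X = np.array(X0, dtype=float)        # row i holds agent i's iterate
        n = len(subgrads)
        for k in range(1, steps + 1):
            alpha = 1.0 / k                  # diminishing step size (assumed)
            mixed = W @ X                    # consensus: mix neighbors' iterates
            for i in range(n):
                y = mixed[i] - alpha * subgrads[i](mixed[i])   # local step
                norm = np.linalg.norm(y)
                X[i] = y if norm <= radius else y * (radius / norm)  # project
        return X

Note that only the rows of W need to sum to one here; dropping the column-sum (doubly stochastic) requirement is exactly the relaxation the paper studies.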
We construct examples of Lipschitz continuous functions with pathological subgradient dynamics, in both continuous and discrete time. In both settings, the iterates generate bounded trajectories and yet fail to detect any (generalized) critical points of the function.
Hao Wu (2021)
We establish a convergence theorem for a certain type of stochastic gradient descent, which leads to a convergent variant of the back-propagation algorithm.
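The abstract does not specify the variant, so the sketch below shows only the standard convergence-friendly template: stochastic gradient descent with Robbins-Monro step sizes (sum of alpha_k diverges, sum of alpha_k^2 converges). It is a generic illustration, not the paper's specific scheme.

    import numpy as np

    # Generic SGD with Robbins-Monro step sizes alpha_k = c/k.
    # stoch_grad(x, rng) is a hypothetical unbiased stochastic gradient oracle.
    def sgd(stoch_grad, x0, steps, c=1.0, seed=0):
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        for k in range(1, steps + 1):
            x = x - (c / k) * stoch_grad(x, rng)
        return x

    # Example: recover the mean of N(1, 1) by minimizing E[(x - Z)^2] / 2,
    # whose stochastic gradient at x is x - z for a sample z.
    print(sgd(lambda x, rng: x - rng.normal(1.0, 1.0), x0=0.0, steps=50000))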
This paper proposes a new algorithm, the Single-timescale Double-momentum Stochastic Approximation (SUSTAIN), for tackling stochastic unconstrained bilevel optimization problems. We focus on bilevel problems where the lower-level subproblem is strongly convex and the upper-level objective function is smooth. Unlike prior works that rely on two-timescale or double-loop techniques, we design a stochastic momentum-assisted gradient estimator for both the upper- and lower-level updates. The latter allows us to control the error in the stochastic gradient updates due to inaccurate solutions to both subproblems. If the upper objective function is smooth but possibly non-convex, we show that SUSTAIN requires $\mathcal{O}(\epsilon^{-3/2})$ iterations (each using $\mathcal{O}(1)$ samples) to find an $\epsilon$-stationary solution, defined as a point at which the squared norm of the gradient of the outer function is at most $\epsilon$. The total number of stochastic gradient samples required for the upper- and lower-level objective functions matches the best-known complexity for single-level stochastic gradient algorithms. We also analyze the case where the upper-level objective function is strongly convex.
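The momentum-assisted estimator is easy to state in code. The sketch below shows a recursive-momentum ("STORM-style") stochastic gradient estimator, the kind of variance-reduced building block such single-timescale methods apply at both levels; the names here are assumed, and the paper's full algorithm additionally handles the implicit gradient of the bilevel objective.

    # Recursive-momentum stochastic gradient estimator (STORM-style sketch).
    # stoch_grad(x, sample) is a hypothetical unbiased gradient oracle;
    # eta in (0, 1] is the momentum weight.
    def momentum_estimate(d_prev, x_prev, x, sample, stoch_grad, eta):
        # Evaluating the *same* sample at the old and new points lets the
        # estimator's error contract by a factor (1 - eta) per iteration,
        # which is what removes the need for two timescales or double loops.
        correction = d_prev - stoch_grad(x_prev, sample)
        return stoch_grad(x, sample) + (1.0 - eta) * correction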
Nan Chen, Xiang Ma, Yanchu Liu (2020)
We use the technique of information relaxation to develop a duality-driven iterative approach to obtaining and improving confidence-interval estimates for the true value of finite-horizon stochastic dynamic programming problems. We show that the sequence of dual value estimates yielded by the proposed approach in principle converges monotonically to the true value function in a finite number of dual iterations. Aiming to overcome the curse of dimensionality in various applications, we also introduce a regression-based Monte Carlo algorithm for implementation. The new approach can be used not only to assess the quality of heuristic policies, but also to improve them if we find that their duality gap is large. We obtain the convergence rate of our Monte Carlo method in terms of the numbers of both basis functions and sampled states. Finally, we demonstrate the effectiveness of our method on an optimal order execution problem with market friction and on an inventory management problem with lost sales and lead times. Both examples are well known in the literature to be difficult to solve to optimality. The experiments show that our method can significantly improve the heuristics suggested in the literature and obtain new policies with a satisfactory performance guarantee.
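To make the duality idea concrete, the sketch below computes the simplest information-relaxation bound for a toy finite-horizon, finite-state maximization problem: reveal the whole noise path, solve the resulting deterministic problem by backward recursion, and average over paths. With a zero penalty this yields an upper bound on the true value; the paper's contribution is to iterate, using each dual value estimate to build a penalty that tightens the next bound. All problem primitives here (state set, dynamics, rewards) are placeholders.

    import numpy as np

    # Zero-penalty information-relaxation upper bound (toy sketch).
    # step(s, a, w) and reward(s, a) are hypothetical problem primitives.
    def pathwise_value(noise, states, actions, step, reward, s0):
        V = {s: 0.0 for s in states}         # terminal value is zero
        for w in reversed(noise):            # deterministic DP along the path
            V = {s: max(reward(s, a) + V[step(s, a, w)] for a in actions)
                 for s in states}
        return V[s0]

    def dual_upper_bound(states, actions, step, reward, s0, horizon, n_paths, seed=0):
        rng = np.random.default_rng(seed)
        vals = [pathwise_value(rng.standard_normal(horizon), states, actions,
                               step, reward, s0) for _ in range(n_paths)]
        # Monte Carlo estimate of the dual bound and its standard error.
        return float(np.mean(vals)), float(np.std(vals) / np.sqrt(n_paths))

In a regression-based implementation like the paper's, the exact dictionary V over all states would be replaced by a fitted approximation over sampled states, which is what makes higher-dimensional problems tractable.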
