
On the metric resolvent: nonexpansiveness, convergence rates and applications

Added by: Feng Xue
Publication date: 2021
Language: English
Authors: Feng Xue





In this paper, we study the nonexpansive properties of the metric resolvent and present a convergence rate analysis for the associated fixed-point iterations (of Banach-Picard and Krasnoselskii-Mann type). Equipped with a variable metric, we develop global ergodic and non-ergodic iteration-complexity bounds in terms of both the solution distance and the objective value. As a byproduct, our exposition also extends the proximity operator and Moreau's decomposition identity to an arbitrary variable metric. It is further shown that many classes of first-order operator splitting algorithms, including the alternating direction method of multipliers, primal-dual hybrid gradient and Bregman iterations, can be expressed as fixed-point iterations of a simple metric resolvent, so their convergence can be analyzed within this unified framework.
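For readers new to the notation, the standard objects behind this abstract can be sketched as follows (common conventions; the paper's precise metric assumptions are not reproduced here). Given a maximally monotone operator A and a symmetric positive definite metric M, the metric resolvent is

\[
  J_A^M := (M + A)^{-1} M ,
\]

and the two fixed-point iterations read

\[
  x^{k+1} = J_A^M x^k \quad \text{(Banach-Picard)}, \qquad
  x^{k+1} = x^k + \lambda_k \bigl( J_A^M x^k - x^k \bigr), \ \lambda_k \in (0,2) \quad \text{(Krasnoselskii-Mann)}.
\]

Fixed points of J_A^M are exactly the zeros of A: x = (M + A)^{-1} M x is equivalent to M x \in M x + A x, i.e. 0 \in A x, which is what makes the resolvent a natural building block for splitting schemes.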

Related research


Feng Xue (2021)
In this paper, we consider a generalized forward-backward splitting (G-FBS) operator for solving monotone inclusions, and analyze its nonexpansive properties in the context of an arbitrary variable metric. Then, for the associated fixed-point iterations (i.e. the G-FBS algorithms), the global ergodic and pointwise convergence rates of the metric distance are obtained from the nonexpansiveness. The convergence in terms of the objective function value is also investigated when the G-FBS operator is applied to a minimization problem. A main contribution of this paper is to show that the G-FBS operator provides a simplifying and unifying framework for modeling and analyzing a great variety of operator splitting algorithms, whose convergence behaviours can be easily described by the fixed-point construction of this simple operator.
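As a point of reference (the paper's exact operator may place the metric differently), a generalized forward-backward operator for the inclusion 0 \in Ax + Bx, with A maximally monotone and B single-valued, is typically written as

\[
  T := (M + A)^{-1} (M - B),
\]

whose fixed points solve the inclusion: x = Tx means Mx - Bx \in Mx + Ax, i.e. 0 \in Ax + Bx. With M = \tfrac{1}{\tau} I, A = \partial g and B = \nabla f, this reduces to the classical proximal-gradient step x \mapsto \mathrm{prox}_{\tau g}\bigl(x - \tau \nabla f(x)\bigr).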
Motivated by broad applications in machine learning, we study the popular accelerated stochastic gradient descent (ASGD) algorithm for solving (possibly nonconvex) optimization problems. We characterize the finite-time performance of this method when the gradients are sampled from Markov processes, and hence are biased and dependent from time step to time step; in contrast, the analysis in existing work relies heavily on the stochastic gradients being independent and sometimes unbiased. Our main contributions show that, under certain (standard) assumptions on the underlying Markov chain generating the gradients, ASGD converges at nearly the same rate with Markovian gradient samples as with independent gradient samples. The only difference is a logarithmic factor that accounts for the mixing time of the Markov chain. One of the key motivations for this study is complicated control problems that can be modeled by a Markov decision process and solved using reinforcement learning. We apply the accelerated method to several challenging problems in the OpenAI Gym and MuJoCo, and show that acceleration can significantly improve the performance of the classic temporal difference learning and REINFORCE algorithms.
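The following toy sketch illustrates the sampling scheme this analysis addresses: a Nesterov-style accelerated SGD loop whose gradients come from consecutive states of a Markov chain rather than from i.i.d. draws. The quadratic objective, the two-state chain and the step sizes are illustrative assumptions, not the paper's setup.

import numpy as np

rng = np.random.default_rng(0)

# Toy objective: f(x) = E_pi[0.5 * (a_s . x - b_s)^2] over chain states s.
A = np.array([[1.0, 0.2], [0.3, 1.5]])  # one data row per chain state
b = np.array([1.0, -0.5])
P = np.array([[0.9, 0.1], [0.2, 0.8]])  # Markov transition matrix

def grad(x, s):
    # Per-state gradient: biased as an estimate of the full gradient
    # because consecutive states s are dependent.
    return (A[s] @ x - b[s]) * A[s]

x = y = np.zeros(2)
s = 0                         # initial chain state
alpha, beta = 0.05, 0.9       # step size and momentum (illustrative)
for k in range(5000):
    s = rng.choice(2, p=P[s])         # next, dependent, sample
    x_new = y - alpha * grad(y, s)    # gradient step at extrapolated point
    y = x_new + beta * (x_new - x)    # Nesterov extrapolation
    x = x_new
print("final iterate:", x)

The mixing time of P is what introduces the extra logarithmic factor in the rate: the longer the chain takes to forget its current state, the longer a window of iterates must pass before the gradient bias becomes negligible.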
We study convergence rates of the classic proximal bundle method for a variety of nonsmooth convex optimization problems. We show that, without any modification, this algorithm adapts to converge faster in the presence of smoothness or a Hölder growth condition. Our analysis reveals that with a constant stepsize, the bundle method is adaptive, yet it exhibits suboptimal convergence rates. We overcome this shortcoming by proposing nonconstant stepsize schemes with optimal rates. These schemes use function information such as growth constants, which might be prohibitive in practice. We complete the paper with a new parallelizable variant of the bundle method that attains near-optimal rates without prior knowledge of function parameters. These results improve on the limited existing convergence rates and provide a unified analysis approach across problem settings and algorithmic details. Numerical experiments support our findings and illustrate the effectiveness of the parallel bundle method.
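For context, the classic proximal bundle step studied here minimizes a cutting-plane model of f plus a proximal term (standard form; the stepsize notation t_k is generic):

\[
  x^{k+1} = \operatorname*{argmin}_{x} \; \max_{i \in B_k} \bigl\{ f(x_i) + \langle g_i, x - x_i \rangle \bigr\} + \frac{1}{2 t_k} \lVert x - \hat{x}_k \rVert^2 ,
\]

where g_i \in \partial f(x_i) are previously collected subgradients, B_k is the current bundle, and \hat{x}_k is the stability center, updated only on sufficient decrease. The constant-versus-nonconstant choices of t_k are exactly the stepsize schemes whose rates this abstract compares.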
Benjamin Grimmer (2019)
Often in the analysis of first-order methods, assuming the existence of a quadratic growth bound (a generalization of strong convexity) facilitates much stronger convergence analysis. Hence the analysis is done twice, once for the general case and once for the growth bounded case. We give a meta-theorem for deriving general convergence rates from those assuming a growth lower bound. Applying this simple but conceptually powerful tool to the proximal point method, the subgradient method, and the bundle method immediately recovers their known convergence rates for general convex optimization problems from their specialized rates. Future works studying first-order methods can assume growth bounds for the sake of analysis without hampering the generality of the results. Our results can be applied to lift any rate based on a Hölder growth bound. As a consequence, guarantees for minimizing sharp functions imply guarantees for both general functions and those satisfying quadratic growth.
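The growth conditions in question are standardly stated as (constants generic):

\[
  f(x) - \min f \;\ge\; \mu \, \operatorname{dist}(x, X^\star)^{p} \qquad \text{for all } x ,
\]

where X^\star is the solution set; p = 2 gives quadratic growth, p = 1 sharpness, and general p \ge 1 Hölder growth. The meta-theorem lifts a convergence rate proved under such a bound to one valid for arbitrary convex f.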
D. Leventhal, A.S. Lewis (2008)
We study randomized variants of two classical algorithms: coordinate descent for systems of linear equations and iterated projections for systems of linear inequalities. Expanding on a recent randomized iterated projection algorithm of Strohmer and Vershynin for systems of linear equations, we show that, under appropriate probability distributions, the linear rates of convergence (in expectation) can be bounded in terms of natural linear-algebraic condition numbers for the problems. We relate these condition measures to distances to ill-posedness, and discuss generalizations to convex systems under metric regularity assumptions.
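A minimal sketch of the Strohmer-Vershynin-style randomized scheme this work builds on: for a consistent system Ax = b, each step projects the iterate onto the hyperplane of one row, with rows sampled with probability proportional to their squared norms. The test problem below is illustrative.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5))
x_true = rng.standard_normal(5)
b = A @ x_true                # consistent linear system

probs = (A ** 2).sum(axis=1)
probs /= probs.sum()          # sample rows proportionally to ||a_i||^2

x = np.zeros(5)
for _ in range(2000):
    i = rng.choice(len(b), p=probs)
    a = A[i]
    x += (b[i] - a @ x) / (a @ a) * a   # orthogonal projection onto row i
print("distance to solution:", np.linalg.norm(x - x_true))

The expected linear rate of this scheme is governed by a condition number of A, which is the kind of linear-algebraic condition measure the abstract relates to distances to ill-posedness.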