
Non-monotone Behavior of the Heavy Ball Method

Published by Marina Danilova
Publication date: 2018
Paper language: English





We focus on the solutions of second-order stable linear difference equations and demonstrate that their behavior can be non-monotone and exhibit peak effects depending on the initial conditions. The results are applied to the analysis of the accelerated unconstrained optimization method, the Heavy Ball method. We explain the non-standard behavior of the method discovered in practical applications. In addition, such non-monotonicity complicates the correct choice of parameters in optimization methods. We propose to overcome this difficulty by introducing a new Lyapunov function which decreases monotonically. Using this function, convergence of the method is established under less restrictive assumptions (for instance, in the absence of convexity). We also suggest some restart techniques to speed up the method's convergence.
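To make the non-monotonicity concrete, here is a minimal Python sketch (not the authors' code; the test quadratic and the step-size/momentum values are illustrative assumptions) showing that Heavy Ball iterates need not decrease the objective monotonically:

```python
import numpy as np

def heavy_ball(grad, x0, alpha, beta, n_iter=60):
    # Heavy Ball iteration: x_{k+1} = x_k - alpha*grad(x_k) + beta*(x_k - x_{k-1})
    x_prev, x = x0.copy(), x0.copy()
    traj = [x.copy()]
    for _ in range(n_iter):
        x, x_prev = x - alpha * grad(x) + beta * (x - x_prev), x
        traj.append(x.copy())
    return np.array(traj)

# Ill-conditioned quadratic f(x) = 0.5 * x^T A x; parameters chosen for illustration.
A = np.diag([1.0, 100.0])
traj = heavy_ball(lambda x: A @ x, x0=np.array([1.0, 1.0]), alpha=0.02, beta=0.9)
f_vals = 0.5 * np.einsum('ki,ij,kj->k', traj, A, traj)
print(np.all(np.diff(f_vals) <= 0))  # typically False: f(x_k) oscillates before settling
```

The paper's Lyapunov-function approach replaces f(x_k) with a quantity that does decrease monotonically along such trajectories.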




Read also

In this paper, we revisit the convergence of the Heavy-ball method and present improved convergence complexity results in the convex setting. We provide the first non-ergodic O(1/k) rate result for the Heavy-ball algorithm with constant step size for coercive objective functions. For objective functions satisfying a relaxed strongly convex condition, linear convergence is established under weaker assumptions on the step size and inertial parameter than those made in the existing literature. We extend our results to the multi-block version of the algorithm with both cyclic and stochastic update rules. In addition, our results can also be extended to decentralized optimization, where the ergodic analysis is not applicable.
Tao Sun, Dongsheng Li, Zhe Quan (2019)
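As a rough illustration of the multi-block scheme described in the abstract above, the sketch below updates one coordinate block at a time with a blockwise momentum term (the block partition, the parameters, and the helper grad_block are illustrative assumptions; the stochastic rule would sample the block index instead of cycling, and convergence here relies on generic small-step heuristics rather than the paper's analysis):

```python
import numpy as np

def multiblock_heavy_ball(grad_block, x0, alpha, beta, n_blocks, n_epochs=300):
    # Cyclic multi-block heavy-ball sketch: only one block moves per step,
    # and each coordinate remembers its value from its own previous update,
    # so the momentum term is formed blockwise.
    x, x_old = x0.copy(), x0.copy()
    blocks = np.array_split(np.arange(x0.size), n_blocks)
    for _ in range(n_epochs):
        for b in blocks:  # cyclic rule; a stochastic rule would pick b at random
            new_b = x[b] - alpha * grad_block(x, b) + beta * (x[b] - x_old[b])
            x_old[b], x[b] = x[b], new_b
    return x

# Usage on a strongly convex quadratic f(x) = 0.5 * x^T A x.
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = M.T @ M + np.eye(6)
x = multiblock_heavy_ball(lambda x, b: (A @ x)[b], np.ones(6),
                          alpha=0.02, beta=0.5, n_blocks=3)
print(np.linalg.norm(x))  # small: iterates approach the minimizer at the origin
```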
Nonconvex optimization algorithms with random initialization have attracted increasing attention recently. It has been shown that many first-order methods always avoid saddle points with random starting points. In this paper, we answer a question: can the nonconvex heavy-ball algorithms with random initialization avoid saddle points? The answer is yes! Directly using the existing proof technique for the heavy-ball algorithms is hard because each iteration of the heavy-ball algorithm involves both the current and the last point. It is impossible to formulate the algorithm as an iteration $x_{k+1} = g(x_k)$ under some mapping $g$. To this end, we design a new mapping on a new space. With some transformations, the heavy-ball algorithm can be interpreted as iterations of this mapping. Theoretically, we prove that heavy-ball gradient descent enjoys a larger step size than gradient descent for escaping saddle points. The heavy-ball proximal point algorithm is also considered; we prove that it, too, always escapes saddle points.
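A sketch of the reformulation idea (the actual mapping and space constructed in the paper may differ, and the parameters and nonconvex test function here are assumptions): stacking the two most recent iterates, $z_k = (x_k, x_{k-1})$, turns the two-step heavy-ball recursion into a single-valued iteration $z_{k+1} = g(z_k)$, to which dynamical-systems arguments apply.

```python
import numpy as np

def hb_as_map(grad, alpha, beta):
    # View the heavy-ball step as a map g on the product space z = (x_k, x_{k-1}),
    # so the two-step recursion becomes the one-step iteration z_{k+1} = g(z_k).
    def g(z):
        x, x_prev = z
        return (x - alpha * grad(x) + beta * (x - x_prev), x)
    return g

# Nonconvex 1D example f(x) = x^4/4 - x^2/2: strict-saddle-type critical point
# at x = 0 (a local maximum in 1D), minimizers at x = +/-1.
g = hb_as_map(lambda x: x**3 - x, alpha=0.1, beta=0.5)
z = (np.array([0.3]), np.array([0.3]))
for _ in range(200):
    z = g(z)
print(z[0])  # lands near a minimizer (here 1.0), not the critical point at 0
```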
We develop a distributed algorithm for convex Empirical Risk Minimization, the problem of minimizing a large but finite sum of convex functions over networks. The proposed algorithm is derived by directly discretizing the second-order heavy-ball differential equation and achieves an accelerated convergence rate, i.e., faster than distributed gradient descent-based methods for strongly convex objectives that may not be smooth. Notably, we achieve acceleration without resorting to the well-known Nesterov's momentum approach. We provide numerical experiments and contrast the proposed method with recently proposed optimal distributed optimization algorithms.
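The discretization idea, sketched here in centralized form under assumed constants (the paper's algorithm additionally distributes the computation over a network), is that an explicit finite-difference discretization of the heavy-ball ODE $x''(t) + a x'(t) + \nabla f(x(t)) = 0$ yields a momentum-type update:

```python
import numpy as np

# Discretize x''(t) + a*x'(t) + grad_f(x(t)) = 0 with
#   x'' ~ (x_{k+1} - 2 x_k + x_{k-1}) / h^2,   x' ~ (x_k - x_{k-1}) / h,
# and solve for x_{k+1}; a and h are assumed tuning constants.
def hb_ode_step(x, x_prev, grad_f, a=1.0, h=0.1):
    return x + (1.0 - a * h) * (x - x_prev) - h**2 * grad_f(x)

grad_f = lambda x: x  # f(x) = 0.5 * x^2, a smooth strongly convex stand-in
x_prev = x = np.array([2.0])
for _ in range(300):
    x, x_prev = hb_ode_step(x, x_prev, grad_f), x
print(x)  # approaches the minimizer at 0
```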
The paper investigates the throughput behavior of single-commodity dynamical flow networks governed by monotone distributed routing policies. The networks are modeled as systems of ODEs based on mass conservation laws on directed graphs with limited flow capacities on the links and constant external inflows at certain origin nodes. Under monotonicity assumptions on the routing policies, it is proven that a globally asymptotically stable equilibrium exists, so that the network achieves maximal throughput, provided that no cut capacity constraint is violated by the external inflows. Conversely, should such a constraint be violated, the network overload behavior is characterized. In particular, it is established that there exists a cut with respect to which the flow densities on every link grow linearly over time (resp. reach their respective limits simultaneously) in the case where the buffer capacities are infinite (resp. finite). The results employ an $l_1$-contraction principle for monotone dynamical systems.
In this paper, we propose a new non-monotone conjugate gradient method for solving unconstrained nonlinear optimization problems. We first modify the non-monotone line search method by introducing a new trigonometric function to calculate the non-monotone parameter, which plays an essential role in the algorithm's efficiency. Then, we apply a convex combination of the Barzilai-Borwein method for calculating the step size in each iteration. Under suitable assumptions, we prove that the new algorithm has the global convergence property. The efficiency and effectiveness of the proposed method are demonstrated in practice by applying the algorithm to some standard test problems and to non-negative matrix factorization problems.
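The abstract does not reproduce the trigonometric formula for the non-monotone parameter, so the sketch below shows only the step-size ingredient it names: a convex combination of the two classical Barzilai-Borwein step sizes (the mixing weight theta and the safeguard value are assumptions):

```python
import numpy as np

def bb_convex_step(s, y, theta=0.5, eps=1e-12):
    # s = x_k - x_{k-1}, y = grad_k - grad_{k-1}.
    # Classical BB step sizes: alpha_BB1 = s's / s'y, alpha_BB2 = s'y / y'y.
    sy = float(s @ y)
    if sy <= eps:  # safeguard when the curvature estimate is unreliable
        return 1.0
    bb1 = float(s @ s) / sy
    bb2 = sy / float(y @ y)
    return theta * bb1 + (1.0 - theta) * bb2

# Example on f(x) = 0.5 * x^T A x: consecutive iterates and gradients give s and y.
A = np.diag([1.0, 10.0])
x0, x1 = np.array([1.0, 1.0]), np.array([0.9, 0.0])
s, y = x1 - x0, A @ x1 - A @ x0
print(bb_convex_step(s, y))  # lies between alpha_BB2 and alpha_BB1
```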