Convergence analysis of inexact descent algorithm for multiobjective optimizations on Riemannian manifolds without curvature constraints

Posted by: Xiangmei Wang
Publication date: 2021
Language: English

We study the convergence issue for an inexact descent algorithm (employing general step sizes) for multiobjective optimizations on general Riemannian manifolds (without curvature constraints). Under the assumption of local convexity/quasi-convexity, local/global convergence results are established. On the other hand, without the assumption of local convexity/quasi-convexity, but under a Kurdyka-Łojasiewicz-like condition, local/global linear convergence results are presented; these seem new even in the Euclidean space setting and sharply improve the corresponding results in [24] in the case when the multiobjective optimization problem reduces to the scalar case. Finally, for the special case when the inexact descent algorithm employs the Armijo rule, our results sharply improve/extend the corresponding ones in [3,2,38].
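To make the scheme concrete, here is a minimal Euclidean sketch (our illustration, not the paper's Riemannian algorithm): the common descent direction is the negated minimum-norm convex combination of the objective gradients (closed form for two objectives), and the Armijo rule backtracks until every objective satisfies the sufficient-decrease test. The function names and the toy problem are assumptions made for the example.

```python
import numpy as np

def common_descent_direction(g1, g2):
    """Minimum-norm convex combination of two gradients, negated.
    For m = 2, the minimizer over lam in [0, 1] of
    ||lam*g1 + (1-lam)*g2||^2 has a closed form; m > 2 needs a QP."""
    diff = g1 - g2
    denom = diff @ diff
    lam = 0.5 if denom < 1e-16 else np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0)
    return -(lam * g1 + (1.0 - lam) * g2)

def armijo_mo_descent(fs, grads, x, beta=1e-4, shrink=0.5, tol=1e-8, iters=500):
    """Multiobjective descent with the Armijo rule: backtrack until the
    sufficient-decrease condition holds for *every* objective."""
    for _ in range(iters):
        g1, g2 = grads(x)
        d = common_descent_direction(g1, g2)
        if np.linalg.norm(d) < tol:          # x is Pareto critical
            break
        t, fx = 1.0, [f(x) for f in fs]
        while any(f(x + t * d) > v + beta * t * (g @ d)
                  for f, v, g in zip(fs, fx, (g1, g2))):
            t *= shrink
        x = x + t * d
    return x

# Toy bi-objective problem: the Pareto set is the segment joining
# -1*ones and +1*ones.
f1 = lambda x: np.sum((x - 1.0) ** 2)
f2 = lambda x: np.sum((x + 1.0) ** 2)
grads = lambda x: (2.0 * (x - 1.0), 2.0 * (x + 1.0))
print(armijo_mo_descent([f1, f2], grads, np.array([3.0, -2.0])))
```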




Read also

We study the convergence issue for the gradient algorithm (employing general step sizes) for optimization problems on general Riemannian manifolds (without curvature constraints). Under the assumption of local convexity/quasi-convexity (resp. weak sharp minima), local/global convergence (resp. linear convergence) results are established. As an application, the linear convergence properties of the gradient algorithm employing the constant step sizes and the Armijo step sizes for finding the Riemannian $L^p$ ($p \in [1,+\infty)$) centers of mass are explored, respectively, which in particular extend and/or improve the corresponding results in \cite{Afsari2013}.
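As a concrete instance of the application above, the following sketch (ours, under the assumption p = 2 and the unit sphere as the manifold) runs Riemannian gradient descent with a constant step size to find the L^2 center of mass (Karcher mean); the exp/log maps are the standard sphere formulas, and general L^p or other manifolds would need the corresponding maps.

```python
import numpy as np

def exp_map(x, v):
    """Exponential map on the unit sphere S^{n-1}."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return x
    return np.cos(nv) * x + np.sin(nv) * v / nv

def log_map(x, y):
    """Inverse exponential map on the unit sphere."""
    c = np.clip(x @ y, -1.0, 1.0)
    theta = np.arccos(c)
    if theta < 1e-12:
        return np.zeros_like(x)
    u = y - c * x
    return theta * u / np.linalg.norm(u)

def karcher_mean(points, step=1.0, tol=1e-10, iters=100):
    """Riemannian gradient descent with a constant step size for the
    L^2 center of mass: grad F(x) = -(1/n) * sum_i log_x(y_i)."""
    x = points[0]
    for _ in range(iters):
        g = -np.mean([log_map(x, y) for y in points], axis=0)
        if np.linalg.norm(g) < tol:
            break
        x = exp_map(x, -step * g)
    return x

# Points clustered near the north pole of S^2.
rng = np.random.default_rng(0)
pts = rng.normal([0.0, 0.0, 5.0], 0.3, size=(20, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
print(karcher_mean(pts))
```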
Mirror descent (MD) is a powerful first-order optimization technique that subsumes several optimization algorithms, including gradient descent (GD). In this work, we study the exact convergence rate of MD in both the centralized and distributed cases for strongly convex and smooth problems. We view MD through a dynamical-systems lens and leverage quadratic constraints (QCs) to provide convergence guarantees based on Lyapunov stability. For centralized MD, we establish a semidefinite program (SDP) that certifies exponentially fast convergence of MD subject to a linear matrix inequality (LMI). We prove that the SDP always has a feasible solution that recovers the optimal GD rate. Next, we analyze the exponential convergence of distributed MD and characterize the rate using two LMIs. To the best of our knowledge, the exact (exponential) rate of distributed MD has not been previously explored in the literature. We present numerical results as a verification of our theory and observe that the richness of the Lyapunov function entails better (worst-case) convergence rates compared to existing works on distributed GD.
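For intuition on how MD generalizes GD, here is a minimal centralized sketch (our illustration, unrelated to the paper's SDP/LMI machinery): MD with the negative-entropy mirror map on the probability simplex, whose update is the exponentiated-gradient rule; the squared Euclidean mirror map would recover plain GD.

```python
import numpy as np

def entropic_mirror_descent(grad, x0, step=0.1, iters=2000):
    """Mirror descent with the negative-entropy mirror map on the
    probability simplex: the update is the exponentiated-gradient
    rule, which keeps the iterate strictly inside the simplex."""
    x = x0.copy()
    for _ in range(iters):
        x = x * np.exp(-step * grad(x))
        x /= x.sum()                     # Bregman projection = renormalize
    return x

# Minimize a strongly convex quadratic over the simplex.
A = np.diag([1.0, 2.0, 4.0])
b = np.array([1.0, 0.5, 0.25])
grad = lambda x: A @ x - b               # gradient of 0.5*x'Ax - b'x
x0 = np.ones(3) / 3
print(entropic_mirror_descent(grad, x0))
```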
The Euclidean space notion of convex sets (and functions) generalizes to Riemannian manifolds in a natural sense and is called geodesic convexity. Extensively studied computational problems such as convex optimization and sampling in convex sets also have meaningful counterparts in the manifold setting. Geodesically convex optimization is a well-studied problem with ongoing research and considerable recent interest in machine learning and theoretical computer science. In this paper, we study sampling and convex optimization problems over manifolds of non-negative curvature, proving polynomial running time in the dimension and other relevant parameters. Our algorithms assume a warm start. We first present a random walk based sampling algorithm and then combine it with simulated annealing for solving convex optimization problems. To our knowledge, these are the first algorithms in the general setting of positively curved manifolds with provable polynomial guarantees under reasonable assumptions, and the first study of the connection between sampling and optimization in this setting.
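The "sampling + simulated annealing" recipe can be illustrated with a toy (ours, with none of the paper's warm start, convexity structure, or polynomial guarantees): a geodesic Metropolis walk on the sphere targeting exp(-f/T), cooled over a temperature schedule.

```python
import numpy as np

rng = np.random.default_rng(1)

def exp_map(x, v):
    """Exponential map on the unit sphere."""
    nv = np.linalg.norm(v)
    return x if nv < 1e-12 else np.cos(nv) * x + np.sin(nv) * v / nv

def tangent_step(x, delta):
    """Random step of length <= delta in the tangent space at x."""
    v = rng.normal(size=x.shape)
    v -= (v @ x) * x                          # project onto T_x S^{n-1}
    return (delta * rng.random() / np.linalg.norm(v)) * v

def anneal_minimize(f, x0, temps, steps_per_temp=200, delta=0.3):
    """Geodesic Metropolis walk targeting exp(-f/T), cooled over a
    schedule: a toy 'sampling + simulated annealing' loop."""
    x = x0 / np.linalg.norm(x0)
    for T in temps:
        for _ in range(steps_per_temp):
            y = exp_map(x, tangent_step(x, delta))
            if rng.random() < np.exp((f(x) - f(y)) / T):   # Metropolis filter
                x = y
    return x

f = lambda x: -x[2]                           # minimized at the north pole
print(anneal_minimize(f, np.array([1.0, 0.0, 0.0]),
                      temps=[1.0, 0.3, 0.1, 0.03]))
```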
Aggregation functions largely determine the convergence and diversity performance of multi-objective evolutionary algorithms in decomposition methods. Nevertheless, the traditional Tchebycheff function does not consider the matching relationship between the weight vectors and candidate solutions. In this paper, the concept of matching degree is proposed, which employs the vectorial angles between weight vectors and candidate solutions. Based on the matching degree, a new modified Tchebycheff aggregation function is proposed, which integrates the matching degree into the Tchebycheff aggregation function. Moreover, the proposed decomposition method has the same functionality as the Tchebycheff aggregation function. Based on the proposed decomposition approach, a new multiobjective optimization algorithm named decomposition-based multi-objective state transition algorithm is proposed. Relevant experimental results show that the proposed algorithm is highly competitive in comparison with other state-of-the-art multiobjective optimization algorithms.
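The abstract does not give the exact modified aggregation function, so the following is a hypothetical sketch of the idea: take the matching degree to be the angle between the weight vector and the translated objective vector, and fold it into the Tchebycheff value as a penalty; the parameter theta and the additive form are our assumptions, and the paper's integration may differ.

```python
import numpy as np

def matching_degree(w, fx, z_star):
    """Angle between the weight vector and the translated objective
    vector f(x) - z*; a smaller angle means a better match."""
    d = fx - z_star
    cos_a = (w @ d) / (np.linalg.norm(w) * np.linalg.norm(d) + 1e-12)
    return np.arccos(np.clip(cos_a, -1.0, 1.0))

def modified_tchebycheff(w, fx, z_star, theta=5.0):
    """Classic Tchebycheff value plus an angle penalty; 'theta' and the
    additive form are our assumptions, not the paper's exact formula."""
    tch = np.max(w * np.abs(fx - z_star))
    return tch + theta * matching_degree(w, fx, z_star)

# Two candidates scored against one weight vector: the candidate whose
# objective vector is better aligned with w gets the smaller value.
w, z = np.array([0.8, 0.2]), np.zeros(2)
for fx in (np.array([0.9, 0.3]), np.array([0.3, 0.9])):
    print(fx, modified_tchebycheff(w, fx, z))
```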