
Continuous iterative algorithms for anti-Cheeger cut

Published by Sihong Shao
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





As a judicious counterpart to the classical maxcut, the anti-Cheeger cut has a more balanced structure, but few numerical results on it have been reported so far. In this paper, we propose a continuous iterative algorithm for the anti-Cheeger cut problem by fully exploiting an equivalent continuous formulation. It requires no rounding at all and has the advantages that all subproblems have explicit analytic solutions, the objective function values are monotonically updated, and the iteration points converge to a local optimum in finitely many steps via an appropriate subgradient selection. It can also be easily combined with the maxcut iterations to break out of local optima and improve the solution quality, thanks to the similarity between the anti-Cheeger cut problem and the maxcut problem. Numerical experiments on the G-set benchmark demonstrate the performance.
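For readers unfamiliar with the objective itself, the following minimal sketch evaluates a discrete anti-Cheeger-style ratio for a given bipartition, assuming the commonly used normalization cut(S, S^c) / max(vol(S), vol(S^c)) with volumes measured by weighted degrees; the exact definition in the paper may differ, and the graph and partition below are purely illustrative.

```python
import numpy as np

def anti_cheeger_ratio(W, s):
    """Evaluate cut(S, S^c) / max(vol(S), vol(S^c)) for a +/-1 labeling s.

    W : symmetric nonnegative weight matrix
    s : vector of +1/-1 labels defining the bipartition
    NOTE: vol(S) is taken as the sum of weighted degrees in S; this is an
    illustrative stand-in, not necessarily the paper's exact normalization.
    """
    deg = W.sum(axis=1)                      # weighted vertex degrees
    S = s > 0
    cut = W[np.ix_(S, ~S)].sum()             # total weight crossing the cut
    vol_S, vol_Sc = deg[S].sum(), deg[~S].sum()
    return cut / max(vol_S, vol_Sc)

# Tiny illustrative example: an unweighted 4-cycle
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
s = np.array([1, -1, 1, -1])                 # alternating partition cuts every edge
print(anti_cheeger_ratio(W, s))              # 4 / max(4, 4) = 1.0
```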




Read also

We propose a simple iterative (SI) algorithm for the maxcut problem by fully exploiting an equivalent continuous formulation. It requires no rounding at all and has the advantages that all subproblems have explicit analytic solutions, the cut values are monotonically updated, and the iteration points converge to a local optimum in finitely many steps via an appropriate subgradient selection. Numerical experiments on the G-set benchmark demonstrate the performance. In particular, the ratios between the best cut values achieved by SI and the best known ones are at least $0.986$ and can be further improved to at least $0.997$ by a preliminary attempt to break out of local optima.
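The abstract does not spell out the continuous formulation, but a widely used equivalent one writes the maxcut objective through the convex function $I(x)=\sum_{i<j} w_{ij}|x_i-x_j|$ maximized over $\|x\|_\infty \le 1$. The sketch below iterates the linearization of $I$ over that box, whose maximizer has the explicit componentwise form $x_i=\mathrm{sign}(g_i)$ for a chosen subgradient $g$; it should be read as a sketch of the flavor of such a scheme under these assumptions, not as the published SI algorithm, whose subgradient selection rule may differ.

```python
import numpy as np

def cut_value(W, x):
    """Cut value of the bipartition induced by the signs of x (each edge counted once)."""
    s = np.where(x >= 0, 1.0, -1.0)
    return 0.25 * np.sum(W * np.abs(s[:, None] - s[None, :]))

def si_style_maxcut(W, x0, max_iter=1000):
    """Subgradient-style iteration on I(x) = sum_{i<j} w_ij |x_i - x_j| over ||x||_inf <= 1.

    Each step picks a subgradient g of the convex function I at the current point and
    replaces x by a maximizer of the linearization <g, x> over the box, i.e.
    x_i = sign(g_i), keeping the previous entry where g_i = 0; by convexity the cut
    value never decreases.  The subgradient choice below (sign(0) = 0) is only one
    valid option; the paper's "appropriate subgradient selection" is what rules out
    degenerate fixed points and gives finite-step convergence to a local optimum.
    """
    x = np.where(x0 >= 0, 1.0, -1.0)
    for _ in range(max_iter):
        g = (W * np.sign(x[:, None] - x[None, :])).sum(axis=1)   # a subgradient of I at x
        x_new = np.where(g > 0, 1.0, np.where(g < 0, -1.0, x))   # argmax of <g, .> over the box
        if np.array_equal(x_new, x):                             # fixed point reached
            break
        x = x_new
    return x

# Illustrative run on a small random graph
rng = np.random.default_rng(0)
W = rng.integers(0, 2, size=(10, 10)).astype(float)
W = np.triu(W, 1); W = W + W.T                                   # symmetric 0/1 weights
x = si_style_maxcut(W, rng.standard_normal(10))
print(cut_value(W, x))
```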
This article derives lower bounds on the convergence rate of continuous-time gradient-based optimization algorithms. The algorithms are subjected to a time-normalization constraint that rules out a reparametrization of time, which would otherwise render any discussion of continuous-time convergence rates meaningless. We reduce the multi-dimensional problem to a single dimension, recover well-known lower bounds from the discrete-time setting, and provide insight into why these lower bounds occur. We present algorithms that achieve the proposed lower bounds, even when the function class under consideration includes certain nonconvex functions.
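To see why some normalization of time is needed before continuous-time rates can be compared, consider the small sketch below: rescaling time in the gradient flow $\dot{x} = -\nabla f(x)$ by any factor $c > 0$ speeds up the apparent convergence by exactly that factor, so without a constraint every rate could be "achieved" trivially. The quadratic objective, step size, horizon, and factor $c$ here are purely illustrative.

```python
import numpy as np

def gradient_flow(x0, grad, c=1.0, T=5.0, dt=1e-3):
    """Forward-Euler simulation of the time-rescaled flow x' = -c * grad(x)."""
    x = float(x0)
    for _ in range(int(T / dt)):
        x -= dt * c * grad(x)
    return x

grad = lambda x: x                          # gradient of f(x) = x^2 / 2, minimizer at 0
x_slow = gradient_flow(2.0, grad, c=1.0)    # ~ 2 * exp(-5)
x_fast = gradient_flow(2.0, grad, c=2.0)    # ~ 2 * exp(-10): same flow, rescaled time
print(x_slow, x_fast)
```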
137 - Tao Hong, Irad Yavneh, 2021
Nesterov's well-known scheme for accelerating gradient descent in convex optimization problems is adapted to accelerating stationary iterative solvers for linear systems. Compared with classical Krylov subspace acceleration methods, the proposed scheme requires more iterations, but it is trivial to implement and retains essentially the same computational cost as the unaccelerated method. An explicit formula for a fixed optimal parameter is derived for the case where the stationary iteration matrix has only real eigenvalues, based only on the smallest and largest eigenvalues. The fixed parameter, and the corresponding convergence factor, are shown to maintain their optimality when the iteration matrix also has complex eigenvalues contained within an explicitly defined disk in the complex plane. A comparison with Chebyshev acceleration based on the same information about the smallest and largest real eigenvalues (dubbed Restricted Information Chebyshev acceleration) demonstrates that Nesterov's scheme is more robust in the sense that it remains optimal over a larger domain when the iteration matrix does have some complex eigenvalues. Numerical tests validate the efficiency of the proposed scheme. This work generalizes and extends the results of [1, Lemmas 3.1 and 3.2 and Theorem 3.3].
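As an illustration of the idea (not the paper's exact scheme, and without its optimal-parameter formula), the sketch below wraps a plain Jacobi sweep for $Ax=b$ in a Nesterov-style momentum step with a fixed, hand-picked parameter $\beta$; the test matrix, right-hand side, and $\beta$ are all assumptions made for demonstration only.

```python
import numpy as np

def jacobi_step(A, b, x):
    """One sweep of the stationary Jacobi iteration x <- D^{-1}(b - (A - D) x)."""
    D = np.diag(A)
    return (b - A @ x + D * x) / D

def accelerated_jacobi(A, b, beta=0.5, iters=200):
    """Nesterov-style acceleration of a stationary iteration with fixed momentum beta.

    y^k = x^k + beta * (x^k - x^{k-1});  x^{k+1} = G(y^k),
    where G is one Jacobi sweep.  beta is a hand-picked constant here,
    not the optimal parameter derived in the paper.
    """
    x_prev = np.zeros_like(b)
    x = np.zeros_like(b)
    for _ in range(iters):
        y = x + beta * (x - x_prev)
        x_prev, x = x, jacobi_step(A, b, y)
    return x

# Diagonally dominant test system (illustrative)
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
print(np.linalg.norm(A @ accelerated_jacobi(A, b) - b))   # residual after acceleration
```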
We examine popular gradient-based algorithms for nonlinear control in the light of the modern complexity analysis of first-order optimization algorithms. The examination reveals that the complexity bounds can be clearly stated in terms of calls to a computational oracle related to dynamic programming and implementable by gradient back-propagation using machine learning software libraries such as PyTorch or TensorFlow. Finally, we propose a regularized Gauss-Newton algorithm enjoying worst-case complexity bounds and improved convergence behavior in practice. The software library based on PyTorch is publicly available.
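The kind of oracle described above can be realized by differentiating a rolled-out trajectory cost with respect to the control sequence. The short sketch below does this for a toy linear system in PyTorch; the dynamics, horizon, and cost are illustrative assumptions, not anything taken from the paper or its library.

```python
import torch

# Toy discrete-time dynamics x_{t+1} = A x_t + B u_t (illustrative only)
A = torch.tensor([[1.0, 0.1], [0.0, 1.0]])
B = torch.tensor([[0.0], [0.1]])
T = 20

u = torch.zeros(T, 1, requires_grad=True)       # control sequence to differentiate through
x = torch.tensor([1.0, 0.0])

cost = torch.tensor(0.0)
for t in range(T):                               # roll out the trajectory
    x = A @ x + B @ u[t]
    cost = cost + x.pow(2).sum() + 0.01 * u[t].pow(2).sum()

cost.backward()                                  # the "oracle": gradients via back-propagation
print(u.grad.shape)                              # gradient of the cost w.r.t. every control
```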
111 - Hongpeng Sun, Xuecheng Tai, 2020
The Potts model has many applications and is equivalent to certain min-cut and max-flow models. Primal-dual algorithms have been used to solve these problems, but owing to the special structure of the models, proving convergence remains difficult. In this work, we develop two novel, preconditioned, over-relaxed alternating direction methods of multipliers (ADMM) with convergence guarantees for these models. Using the proposed preconditioners or block preconditioners, the over-relaxed variants of preconditioned ADMM achieve accelerations. Preconditioned and over-relaxed Douglas-Rachford splitting methods are also considered for the Potts model. Our framework can handle both two-labeling and multi-labeling problems with appropriate block preconditioners based on the Eckstein-Bertsekas and Fortin-Glowinski splitting techniques.
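The abstract concerns preconditioned, over-relaxed ADMM for the Potts model specifically; as a much simpler illustration of the over-relaxation step alone, the sketch below applies it to the generic split problem $\min_x \tfrac12\|x-b\|^2 + \lambda\|z\|_1$ with the consensus constraint $x=z$, where the data, $\lambda$, $\rho$, and the relaxation parameter $\alpha$ are all illustrative assumptions rather than the paper's scheme.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def over_relaxed_admm(b, lam=0.5, rho=1.0, alpha=1.6, iters=100):
    """Over-relaxed ADMM for min_x 0.5||x - b||^2 + lam*||z||_1  s.t.  x = z.

    alpha in (1, 2) is the over-relaxation parameter: the x-update is blended with
    the previous z before the z- and dual updates.  All parameters are illustrative,
    not the preconditioned block scheme from the paper.
    """
    z = np.zeros_like(b)
    y = np.zeros_like(b)                          # scaled dual variable
    for _ in range(iters):
        x = (b + rho * (z - y)) / (1.0 + rho)     # closed-form x-update
        x_hat = alpha * x + (1.0 - alpha) * z     # over-relaxation step
        z = soft_threshold(x_hat + y, lam / rho)  # z-update (prox of the l1 term)
        y = y + x_hat - z                         # dual ascent
    return z

b = np.array([2.0, -0.3, 0.7, -1.5])
print(over_relaxed_admm(b))                       # shrinks small entries toward zero
```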