
A simple iterative algorithm for maxcut

Published by: Sihong Shao
Publication date: 2018
Research field: Informatics Engineering
Paper language: English





We propose a simple iterative (SI) algorithm for the maxcut problem that fully exploits an equivalent continuous formulation. It requires no rounding at all and has the advantages that every subproblem admits an explicit analytic solution, the cut values are updated monotonically, and the iteration points converge to a local optimum in finitely many steps via an appropriate subgradient selection. Numerical experiments on the G-set benchmark demonstrate the performance. In particular, the ratios between the best cut values achieved by SI and the best known ones are at least $0.986$, and they can be further improved to at least $0.997$ by a preliminary attempt to break out of local optima.
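
To make the abstract's description concrete, here is a minimal Python sketch of a monotone iteration of this flavor. It assumes the well-known continuous formulation maxcut = max over |x_i| <= 1 of (1/4) * sum_{i,j} w_ij |x_i - x_j|; the function name si_maxcut_sketch and the tie-breaking rule are illustrative choices guided by the abstract, not the authors' exact SI algorithm.

import numpy as np

def si_maxcut_sketch(W, x0, max_iter=1000):
    # Monotone subgradient iteration for the convex continuous formulation
    #   maximize f(x) = (1/4) * sum_{i,j} W_ij * |x_i - x_j|  over  |x_i| <= 1,
    # whose maximizers include {-1,+1} vectors encoding a maximum cut.
    # Sketch only; not the paper's exact SI method.
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        # A subgradient of f at x (W assumed symmetric with zero diagonal):
        #   g_i = (1/2) * sum_j W_ij * sign(x_i - x_j)
        g = 0.5 * (W * np.sign(x[:, None] - x[None, :])).sum(axis=1)
        # The linear subproblem max_{|y_i|<=1} g^T y has the explicit analytic
        # solution y = sign(g); convexity gives f(y) >= f(x) + g^T(y - x) >= f(x),
        # so the objective (hence the cut value) never decreases.
        y = np.where(g != 0.0, np.sign(g), x)  # one "appropriate" tie-break
        if np.array_equal(y, x):
            break  # fixed point: no single linearization step improves the cut
        x = y
    return x  # entries in {-1, +1} label the two sides of the cut

Starting from a random x0 in {-1,+1}^n and keeping the best cut over several restarts is the obvious way to use such a local iteration; the paper's additional mechanism for breaking out of local optima is not reproduced here.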




Read also

Sihong Shao, Chuan Yang (2021)
As a judicious counterpart to the classical maxcut, the anti-Cheeger cut has a more balanced structure, but few numerical results on it have been reported so far. In this paper, we propose a continuous iterative algorithm for the anti-Cheeger cut problem that fully exploits an equivalent continuous formulation. It requires no rounding at all and has the advantages that every subproblem admits an explicit analytic solution, the objective function values are updated monotonically, and the iteration points converge to a local optimum in finitely many steps via an appropriate subgradient selection. Thanks to the similarity between the anti-Cheeger cut problem and the maxcut problem, it can also be easily combined with the maxcut iterations to break out of local optima and improve the solution quality. Numerical experiments on G-set demonstrate the performance.
Sparse optimization is a central problem in machine learning and computer vision. However, this problem is inherently NP-hard and thus difficult to solve in general. Combinatorial search methods find the global optimal solution but are confined to small-sized problems, while coordinate descent methods are efficient but often suffer from poor local minima. This paper considers a new block decomposition algorithm that combines the effectiveness of combinatorial search methods and the efficiency of coordinate descent methods. Specifically, we consider a random strategy and/or a greedy strategy to select a subset of coordinates as the working set, and then perform a global combinatorial search over the working set based on the original objective function. We show that our method finds stronger stationary points than Amir Beck et al.'s coordinate-wise optimization method. In addition, we establish the convergence rate of our algorithm. Our experiments on solving sparse-regularized and sparsity-constrained least-squares optimization problems demonstrate that our method achieves state-of-the-art performance in terms of accuracy. For example, our method generally outperforms the well-known greedy pursuit method.
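
A minimal sketch of the block-decomposition idea for the l0-regularized least-squares instance min_x ||Ax - b||^2 + lam * ||x||_0, assuming only the random working-set strategy; the function name, block size, and iteration count are illustrative, not the authors' implementation.

import numpy as np
from itertools import combinations

def block_l0_sketch(A, b, lam, block_size=3, n_iter=200, seed=0):
    # Block decomposition for  min_x ||A x - b||^2 + lam * ||x||_0  (a sketch):
    # repeatedly pick a small random working set and solve its subproblem
    # exactly by enumerating all supports inside the block, keeping all
    # other coordinates fixed.
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(n_iter):
        B = rng.choice(n, size=block_size, replace=False)  # working set
        mask = np.ones(n, dtype=bool)
        mask[B] = False
        r = b - A[:, mask] @ x[mask]  # residual with the block's contribution removed
        best_val, best_xB = np.inf, np.zeros(block_size)
        for k in range(block_size + 1):  # global combinatorial search over the block
            for S in combinations(range(block_size), k):
                xB = np.zeros(block_size)
                if S:
                    S = list(S)
                    xB[S] = np.linalg.lstsq(A[:, B[S]], r, rcond=None)[0]
                val = np.sum((A[:, B] @ xB - r) ** 2) + lam * k
                if val < best_val:
                    best_val, best_xB = val, xB
        x[B] = best_xB  # exact block update: the objective never increases
    return x
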
We describe a general-purpose algorithm for counting simple cycles and simple paths of any length $\ell$ on a (weighted di)graph on $N$ vertices and $M$ edges, achieving a time complexity of $O\left(N+M+\big(\ell^\omega+\ell\Delta\big)|S_\ell|\right)$. In this expression, $|S_\ell|$ is the number of (weakly) connected induced subgraphs of $G$ on at most $\ell$ vertices, $\Delta$ is the maximum degree of any vertex and $\omega$ is the exponent of matrix multiplication. We compare the algorithm complexity both theoretically and experimentally with most of the existing algorithms for the same task. These comparisons show that the algorithm described here is the best general-purpose algorithm for the class of graphs where $(\ell^{\omega-1}\Delta^{-1}+1)|S_\ell| \leq |\text{Cycle}_\ell|$, with $|\text{Cycle}_\ell|$ the total number of simple cycles of length at most $\ell$, including backtracks and self-loops. On Erdős–Rényi random graphs, we find empirically that this happens when the edge probability is larger than circa $4/N$. In addition, we show that some real-world networks also belong to this class. Finally, the algorithm permits the enumeration of simple cycles and simple paths on networks where vertices are labeled from an alphabet on $n$ letters with a time complexity of $O\left(N+M+\big(n^\ell\ell^\omega+\ell\Delta\big)|S_\ell|\right)$. A Matlab implementation of the algorithm proposed here is available for download.
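
For contrast with the quantities in this abstract, here is a naive DFS baseline (emphatically not the paper's algorithm) that counts the simple cycles of length at most max_len in an undirected graph; its cost grows with the number of simple paths, which is exactly what the subgraph-based method above avoids.

def count_simple_cycles(adj, max_len):
    # Naive DFS baseline (NOT the paper's algorithm): counts simple cycles of
    # length 3..max_len in an undirected graph given as {vertex: set_of_neighbors}
    # with integer vertex labels. Each cycle is rooted at its smallest vertex
    # and found once per direction, so the raw count is halved at the end.
    count = 0

    def dfs(start, v, depth, visited):
        nonlocal count
        for w in adj[v]:
            if w == start and depth >= 3:
                count += 1  # closed a simple cycle of length `depth`
            elif w > start and w not in visited and depth < max_len:
                visited.add(w)
                dfs(start, w, depth + 1, visited)
                visited.remove(w)

    for s in adj:
        dfs(s, s, 1, {s})
    return count // 2
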
Tao Hong, Irad Yavneh (2021)
Nesterov's well-known scheme for accelerating gradient descent in convex optimization problems is adapted to accelerating stationary iterative solvers for linear systems. Compared with classical Krylov subspace acceleration methods, the proposed scheme requires more iterations, but it is trivial to implement and retains essentially the same computational cost as the unaccelerated method. An explicit formula for a fixed optimal parameter is derived in the case where the stationary iteration matrix has only real eigenvalues, based only on the smallest and largest eigenvalues. The fixed parameter, and the corresponding convergence factor, are shown to maintain their optimality when the iteration matrix also has complex eigenvalues that are contained within an explicitly defined disk in the complex plane. A comparison to Chebyshev acceleration based on the same information of the smallest and largest real eigenvalues (dubbed Restricted Information Chebyshev acceleration) demonstrates that Nesterov's scheme is more robust in the sense that it remains optimal over a larger domain when the iteration matrix does have some complex eigenvalues. Numerical tests validate the efficiency of the proposed scheme. This work generalizes and extends the results of [1, Lemmas 3.1 and 3.2 and Theorem 3.3].
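
A minimal sketch of the scheme's shape, assuming a generic stationary solver sweep T_apply and a user-supplied momentum parameter beta; the paper's contribution is a closed-form optimal fixed beta derived from the smallest and largest eigenvalues of the iteration matrix, which is not reproduced here.

import numpy as np

def accelerated_stationary(T_apply, x0, beta, n_iter=100):
    # Nesterov-style acceleration of a stationary iteration x <- T(x):
    # run the base sweep on an extrapolated point, then extrapolate again.
    # beta is held fixed, matching the paper's fixed-parameter setting.
    x_prev = np.asarray(x0, dtype=float)
    y = x_prev.copy()
    for _ in range(n_iter):
        x = T_apply(y)                  # one sweep of the unaccelerated solver
        y = x + beta * (x - x_prev)     # momentum extrapolation
        x_prev = x
    return x_prev

# Illustrative base sweep: one weighted-Jacobi step for A x = b,
# i.e. T(x) = x + omega * D^{-1} (b - A x), with A, b assumed given.
def make_jacobi_sweep(A, b, omega=0.8):
    d = np.diag(A)
    return lambda x: x + omega * (b - A @ x) / d
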
We give an approximation algorithm for MaxCut and provide guarantees on the average fraction of edges cut on $d$-regular graphs of girth $\geq 2k$. For every $d \geq 3$ and $k \geq 4$, our approximation guarantees are better than those of all other classical and quantum algorithms known to the authors. Our algorithm constructs an explicit vector solution to the standard semidefinite relaxation of MaxCut and applies hyperplane rounding. It may be viewed as a simplification of the previously best known technique, which approximates Gaussian wave processes on the infinite $d$-regular tree.
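
The rounding step mentioned here is standard Goemans-Williamson hyperplane rounding; a minimal sketch follows, assuming the vector solution is given as an n x d array V of unit row vectors (the explicit tree-based construction of those vectors is the paper's contribution and is not reproduced).

import numpy as np

def hyperplane_round(V, seed=None):
    # Hyperplane rounding: draw a random Gaussian vector g and assign each
    # vertex i to the side of the cut given by the sign of <V[i], g>.
    rng = np.random.default_rng(seed)
    g = rng.standard_normal(V.shape[1])  # normal of a uniformly random hyperplane
    side = np.sign(V @ g)
    side[side == 0] = 1.0  # break the measure-zero tie deterministically
    return side  # +1 / -1 labels for the two sides of the cut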