Non-convex sparse minimization (NSM), i.e., $\ell_0$-constrained minimization of convex loss functions, is an important optimization problem with many machine learning applications. NSM is generally NP-hard, so solving it exactly in polynomial time is all but impossible. For quadratic objective functions, exact algorithms based on quadratic mixed-integer programming (MIP) have been studied, but no existing exact method can handle more general objective functions, including Huber and logistic losses; this is unfortunate, since such losses are prevalent in practice. In this paper, we consider NSM with $\ell_2$-regularized convex objective functions and develop an algorithm that leverages the efficiency of best-first search (BFS). Our BFS computes solutions with objective errors of at most $\Delta \ge 0$, where $\Delta$ is a controllable hyper-parameter that balances the trade-off between the error guarantee and computation cost. Experiments demonstrate that our BFS is useful for solving moderate-size NSM instances with non-quadratic objectives, and that it is also faster than the MIP-based method when applied to quadratic objectives.
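The abstract does not spell out the search procedure, so the following is only a hypothetical sketch: a best-first search over support sets for an $\ell_2$-regularized quadratic loss, with the restricted ridge problem solved in closed form at each node. The function names (`bfs_sparse`, `restricted_loss`) and the exhaustive expansion strategy are ours; the paper's $\Delta$-based pruning is not reproduced.

```python
import heapq
import itertools
import numpy as np

def restricted_loss(A, b, support, lam):
    """l2-regularized least-squares loss, minimized over the given support."""
    S = sorted(support)
    if not S:
        return 0.5 * float(b @ b)
    As = A[:, S]
    # Ridge normal equations restricted to the selected columns.
    w = np.linalg.solve(As.T @ As + lam * np.eye(len(S)), As.T @ b)
    r = As @ w - b
    return 0.5 * float(r @ r) + 0.5 * lam * float(w @ w)

def bfs_sparse(A, b, k, lam=1e-2):
    """Best-first search over supports of size at most k. Exhaustive at this
    toy scale; a real solver would prune branches using bounds and the
    tolerance Delta described in the paper."""
    n = A.shape[1]
    tie = itertools.count()  # tiebreaker so the heap never compares sets
    heap = [(restricted_loss(A, b, frozenset(), lam), next(tie), frozenset())]
    seen = {frozenset()}
    best_loss, best_S = np.inf, frozenset()
    while heap:
        loss, _, S = heapq.heappop(heap)  # most promising support first
        if loss < best_loss:
            best_loss, best_S = loss, S
        if len(S) == k:
            continue
        for j in range(n):
            T = frozenset(S | {j})
            if j in S or T in seen:
                continue
            seen.add(T)
            heapq.heappush(heap, (restricted_loss(A, b, T, lam), next(tie), T))
    return best_S, best_loss
```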
Recent papers on approximation algorithms for the traveling salesman problem (TSP) have given a new variant on the well-known Christofides algorithm for the TSP, called the Best-of-Many Christofides algorithm. The algorithm involves sampling a spanning tree …
Energy minimization has been an intensely studied core problem in computer vision. With growing image sizes (2D and 3D), it is now highly desirable to run energy minimization algorithms in parallel. But many existing algorithms, in particular, some …
We study the first-order convex optimization problem, where we have black-box access to a (not necessarily smooth) function $f:\mathbb{R}^n \to \mathbb{R}$ and its (sub)gradient. Our goal is to find an $\epsilon$-approximate minimum of $f$ starting from …
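As context for this setup (and not the paper's own algorithm), the classical subgradient method already attains an $\epsilon$-approximate minimum with $(RG/\epsilon)^2$ black-box queries when $f$ is convex and $G$-Lipschitz and a minimizer lies within distance $R$ of the start; a minimal sketch:

```python
import numpy as np

def subgradient_method(f, subgrad, x0, R, G, eps):
    """Plain subgradient method. For convex, G-Lipschitz f with a minimizer
    within distance R of x0, a constant step eps/G**2 for (R*G/eps)**2
    iterations makes the best iterate an eps-approximate minimum."""
    x = np.asarray(x0, dtype=float)
    T = int(np.ceil((R * G / eps) ** 2))
    step = eps / G ** 2
    best_x, best_f = x.copy(), f(x)
    for _ in range(T):
        x = x - step * np.asarray(subgrad(x), dtype=float)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x.copy(), fx
    return best_x, best_f

# Example: f(x) = ||x||_1, a nonsmooth convex function; sign(x) is a subgradient.
x_best, f_best = subgradient_method(
    f=lambda x: np.abs(x).sum(),
    subgrad=np.sign,
    x0=np.array([3.0, -2.0]), R=4.0, G=np.sqrt(2.0), eps=0.1)
```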
We propose two novel conditional gradient-based methods for solving structured stochastic convex optimization problems with a large number of linear constraints. Instances of this template naturally arise from SDP-relaxations of combinatorial problems …
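The abstract is cut off before the methods themselves are described, so the sketch below shows only the vanilla conditional gradient (Frank-Wolfe) step that such methods build on: each iteration calls a linear minimization oracle over the feasible set instead of computing a projection. The example instance (least squares over the probability simplex) and all names are illustrative.

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, T):
    """Vanilla conditional gradient (Frank-Wolfe): at each step, call a
    linear minimization oracle (lmo) over the feasible set rather than
    projecting, then move toward the atom it returns."""
    x = np.asarray(x0, dtype=float)
    for t in range(T):
        g = grad(x)
        s = lmo(g)                  # argmin over the feasible set of <g, s>
        gamma = 2.0 / (t + 2.0)     # standard diminishing step size
        x = (1 - gamma) * x + gamma * s
    return x

# Example: minimize ||Ax - b||^2 over the probability simplex, whose LMO
# simply picks the vertex with the smallest gradient entry.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
grad = lambda x: 2 * A.T @ (A @ x - b)
lmo = lambda g: np.eye(len(g))[np.argmin(g)]
x = frank_wolfe(grad, lmo, np.ones(5) / 5, T=200)
```

One appeal of this template for SDP-relaxations is that the LMO can be far cheaper than a projection onto the constraint set.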
Optimization algorithms for solving nonconvex inverse problems have attracted significant interest recently. However, existing methods require the nonconvex regularization to be smooth or simple to ensure convergence. In this paper, we propose a novel …