
Convex Optimization for Parallel Energy Minimization

Published by Karri Sesh-Kumar
Publication date: 2015
Research field: Informatics Engineering
Paper language: English





Energy minimization has been an intensely studied core problem in computer vision. With growing image sizes (2D and 3D), it is now highly desirable to run energy minimization algorithms in parallel. But many existing algorithms, in particular some efficient combinatorial algorithms, are difficult to parallelize. By exploiting results from convex and submodular theory, we reformulate the quadratic energy minimization problem as a total variation denoising problem, which, when viewed geometrically, enables the use of projection- and reflection-based convex methods. The resulting min-cut algorithm (and code) is conceptually very simple, and solves a sequence of TV denoising problems. We perform an extensive empirical evaluation comparing state-of-the-art combinatorial algorithms and convex optimization techniques. On small problems the iterative convex methods match the combinatorial max-flow algorithms, while on larger problems they offer additional flexibility and important gains: (a) their memory footprint is small; (b) their straightforward parallelizability fits multi-core platforms; (c) they can easily be warm-started; and (d) they quickly reach good approximate solutions, thereby enabling faster inexact solutions. A key consequence of our approach, based on submodularity and convexity, is that it allows combining arbitrary combinatorial or convex methods as subroutines, yielding hybrid combinatorial and convex optimization algorithms that benefit from the strengths of both.
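The authors' released code is not reproduced here, but the core recipe they describe -- solve a total variation denoising problem whose "signal" encodes the unary terms, then threshold the relaxed solution to recover a cut -- can be sketched in a few lines. The minimal NumPy example below assumes a 1D chain graph with hypothetical unary potentials `theta` and edge weights `w`, and uses a plain projected dual-gradient TV solver rather than the projection/reflection methods evaluated in the paper.

```python
import numpy as np

def tv_denoise_chain(u, w, n_iter=500):
    """Anisotropic TV denoising on a chain graph (illustrative sketch):
    minimize 0.5 * ||x - u||^2 + sum_i w[i] * |x[i+1] - x[i]|
    via projected gradient ascent on the dual edge variables z."""
    n = len(u)
    z = np.zeros(n - 1)                 # one dual variable per chain edge
    tau = 0.25                          # safe step: 1 / lambda_max(D D^T) for the chain
    for _ in range(n_iter):
        x = u - (np.r_[0.0, z] - np.r_[z, 0.0])    # x = u - D^T z
        z = np.clip(z + tau * np.diff(x), -w, w)   # ascent step, then project onto [-w, w]
    return u - (np.r_[0.0, z] - np.r_[z, 0.0])

# Hypothetical binary energy E(y) = sum_i theta[i]*y[i] + sum_i w[i]*|y[i+1]-y[i]|, y in {0,1}^n.
theta = np.array([-2.0, -1.5, 0.5, 1.0, -0.5, 2.0])   # unary potentials (assumed)
w = np.ones(len(theta) - 1)                           # edge weights (assumed)

x_star = tv_denoise_chain(-theta, w)   # TV "denoising" of the negated unaries
y = (x_star > 0).astype(int)           # threshold the relaxed solution to get a cut
print("relaxed solution:", np.round(x_star, 3))
print("binary labels:   ", y)
```

The thresholding step reflects the standard link between the proximal operator of the Lovász extension and submodular (min-cut) minimization; a practical solver would replace the toy dual-gradient loop with the parallel TV routines the paper actually studies.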




Read also

We propose two novel conditional gradient-based methods for solving structured stochastic convex optimization problems with a large number of linear constraints. Instances of this template naturally arise from SDP-relaxations of combinatorial problems, which involve a number of constraints that is polynomial in the problem dimension. The most important feature of our framework is that only a subset of the constraints is processed at each iteration, thus gaining a computational advantage over prior works that require full passes. Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees. Preliminary numerical experiments are provided to illustrate the practical performance of the methods.
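The abstract does not spell out the update rule, so the sketch below is only a generic illustration of the key idea -- a conditional-gradient (Frank-Wolfe) step that penalizes a randomly sampled subset of the linear constraints at each iteration -- and omits the paper's specific variance-reduction and smoothing machinery. The toy problem, penalty parameter `rho`, and batch size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance (all data hypothetical): minimize a convex quadratic over the
# probability simplex subject to many linear inequality constraints A x <= b.
n, m = 20, 1000
Q = rng.standard_normal((n, n)); Q = Q.T @ Q / n
c = rng.standard_normal(n)
A = rng.standard_normal((m, n)); b = np.abs(rng.standard_normal(m)) + 0.5

f_grad = lambda x: Q @ x + c

def sampled_penalty_grad(x, batch=50, rho=10.0):
    """Unbiased estimate of the gradient of (rho/2) * sum_i max(0, a_i^T x - b_i)^2,
    touching only `batch` randomly sampled constraints per call."""
    idx = rng.choice(m, size=batch, replace=False)
    viol = np.maximum(0.0, A[idx] @ x - b[idx])
    return rho * (m / batch) * (A[idx].T @ viol)

x = np.full(n, 1.0 / n)                      # start at the simplex barycenter
for t in range(1, 501):
    g = f_grad(x) + sampled_penalty_grad(x)  # stochastic gradient of the penalized objective
    s = np.zeros(n); s[np.argmin(g)] = 1.0   # linear minimization oracle over the simplex
    gamma = 2.0 / (t + 2)                    # standard conditional-gradient step size
    x = (1 - gamma) * x + gamma * s

print("objective:", 0.5 * x @ Q @ x + c @ x)
print("max constraint violation:", np.max(A @ x - b))
```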
Non-convex sparse minimization (NSM), or $\ell_0$-constrained minimization of convex loss functions, is an important optimization problem that has many machine learning applications. NSM is generally NP-hard, and so exactly solving NSM in polynomial time is all but impossible. For quadratic objective functions, exact algorithms based on quadratic mixed-integer programming (MIP) have been studied, but no existing exact methods can handle more general objective functions, including Huber and logistic losses; this is unfortunate since those functions are prevalent in practice. In this paper, we consider NSM with $\ell_2$-regularized convex objective functions and develop an algorithm by leveraging the efficiency of best-first search (BFS). Our BFS can compute solutions with objective errors at most $\Delta \ge 0$, where $\Delta$ is a controllable hyper-parameter that balances the trade-off between the guarantee of objective errors and computation cost. Experiments demonstrate that our BFS is useful for solving moderate-size NSM instances with non-quadratic objectives and that BFS is also faster than the MIP-based method when applied to quadratic objectives.
Recent work has shown how to embed differentiable optimization problems (that is, problems whose solutions can be backpropagated through) as layers within deep learning architectures. This method provides a useful inductive bias for certain problems, but existing software for differentiable optimization layers is rigid and difficult to apply to new settings. In this paper, we propose an approach to differentiating through disciplined convex programs, a subclass of convex optimization problems used by domain-specific languages (DSLs) for convex optimization. We introduce disciplined parametrized programming, a subset of disciplined convex programming, and we show that every disciplined parametrized program can be represented as the composition of an affine map from parameters to problem data, a solver, and an affine map from the solver's solution to a solution of the original problem (a new form we refer to as affine-solver-affine form). We then demonstrate how to efficiently differentiate through each of these components, allowing for end-to-end analytical differentiation through the entire convex program. We implement our methodology in version 1.1 of CVXPY, a popular Python-embedded DSL for convex optimization, and additionally implement differentiable layers for disciplined convex programs in PyTorch and TensorFlow 2.0. Our implementation significantly lowers the barrier to using convex optimization problems in differentiable programs. We present applications in linear machine learning models and in stochastic control, and we show that our layer is competitive (in execution time) compared to specialized differentiable solvers from past work.
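The DSL referred to is CVXPY, and the differentiable layers for PyTorch and TensorFlow are distributed as the cvxpylayers package. The snippet below follows the package's standard introductory pattern -- a small non-negative regression problem with parameters A and b -- to show how a disciplined parametrized program becomes a differentiable PyTorch layer; the dimensions and data are illustrative.

```python
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

# Disciplined parametrized program: the parameters A and b will be supplied
# later as PyTorch tensors, and gradients will flow back through the solution.
n, m = 2, 3
x = cp.Variable(n)
A = cp.Parameter((m, n))
b = cp.Parameter(m)
objective = cp.Minimize(0.5 * cp.pnorm(A @ x - b, p=1))
problem = cp.Problem(objective, [x >= 0])
assert problem.is_dpp()                    # must satisfy the DPP ruleset

layer = CvxpyLayer(problem, parameters=[A, b], variables=[x])

A_tch = torch.randn(m, n, requires_grad=True)
b_tch = torch.randn(m, requires_grad=True)

solution, = layer(A_tch, b_tch)            # solve the convex program in the forward pass
solution.sum().backward()                  # backpropagate through the solution map
print(A_tch.grad, b_tch.grad)
```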
We propose a new majorization-minimization (MM) method for non-smooth and non-convex programs, which is general enough to include the existing MM methods. Besides the local majorization condition, we only require that the difference between the directional derivatives of the objective function and its surrogate function vanishes when the number of iterations approaches infinity, which is a very weak condition. So our method can use a surrogate function that directly approximates the non-smooth objective function. In comparison, all the existing MM methods construct the surrogate function by approximating the smooth component of the objective function. We apply our relaxed MM methods to the robust matrix factorization (RMF) problem with different regularizations, where our locally majorant algorithm shows advantages over the state-of-the-art approaches for RMF. This is the first algorithm for RMF ensuring, without extra assumptions, that any limit point of the iterates is a stationary point.
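The relaxed MM machinery for robust matrix factorization is not reconstructed here; instead, the short sketch below illustrates the general MM template on a classic example (assumed for illustration): minimizing a non-smooth L1 regression loss by repeatedly minimizing a quadratic majorizer, which turns each iteration into a weighted least-squares solve.

```python
import numpy as np

def robust_regression_mm(X, y, n_iter=50, eps=1e-6):
    """Classic MM illustration (not the paper's relaxed MM for RMF):
    minimize sum_i |y_i - x_i^T beta| by repeatedly minimizing the quadratic
    majorizer |r| <= r^2 / (2|r_k|) + |r_k| / 2 built at the current residuals,
    i.e. iteratively reweighted least squares."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # smooth initialization
    for _ in range(n_iter):
        r = y - X @ beta
        w = 1.0 / np.maximum(np.abs(r), eps)      # majorizer weights at the current iterate
        Xw = X * w[:, None]
        beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)  # weighted least-squares step
    return beta

# Hypothetical data with gross outliers that the L1 loss should resist.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
beta_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ beta_true + 0.1 * rng.standard_normal(200)
y[:10] += 20.0
print(np.round(robust_regression_mm(X, y), 2))
```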
V.N. Temlyakov, 2012
This paper is a follow-up to the authors' previous paper on convex optimization. In that paper we began the process of adjusting greedy-type algorithms from nonlinear approximation for finding sparse solutions of convex optimization problems. We modified there the three most popular greedy algorithms in nonlinear approximation in Banach spaces -- the Weak Chebyshev Greedy Algorithm, the Weak Greedy Algorithm with Free Relaxation, and the Weak Relaxed Greedy Algorithm -- for solving convex optimization problems. We continue to study sparse approximate solutions to convex optimization problems. It is known that in many engineering applications researchers are interested in an approximate solution of an optimization problem as a linear combination of elements from a given system of elements. There is an increasing interest in building such sparse approximate solutions using different greedy-type algorithms. In this paper we concentrate on greedy algorithms that provide expansions, which means that the approximant at the $m$th iteration is equal to the sum of the approximant from the previous ($(m-1)$th) iteration and one element from the dictionary with an appropriate coefficient. The problem of greedy expansions of elements of a Banach space is well studied in nonlinear approximation theory. At first glance, the setting of the problem of expanding a given element and the setting of the expansion problem in optimization are very different. However, it turns out that the same technique can be used for solving both problems. We show how the technique developed in nonlinear approximation theory, in particular the greedy expansions technique, can be adjusted for finding a sparse solution of an optimization problem, given by an expansion with respect to a given dictionary.
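As a rough illustration of a greedy expansion -- not of the specific Weak Chebyshev or relaxed algorithms studied in the paper -- the sketch below minimizes an assumed quadratic objective over linear combinations of dictionary atoms, adding exactly one atom with a line-searched coefficient per iteration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical smooth convex objective f(x) = 0.5 * ||A x - b||^2 and a
# redundant dictionary D of unit-norm atoms; both are illustrative.
n, p = 30, 80
A = rng.standard_normal((50, n))
b = rng.standard_normal(50)
D = rng.standard_normal((n, p))
D /= np.linalg.norm(D, axis=0)

x = np.zeros(n)
for m in range(1, 26):
    g = A.T @ (A @ x - b)                 # gradient of f at the current approximant
    j = np.argmax(np.abs(D.T @ g))        # greedy choice of a single dictionary element
    d = D[:, j]
    c = -(g @ d) / np.sum((A @ d) ** 2)   # exact line search along d (f is quadratic)
    x = x + c * d                         # m-th approximant = (m-1)-th approximant + c * d

print("objective after 25 greedy expansion steps:",
      round(0.5 * np.sum((A @ x - b) ** 2), 4))
```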