
Radial Duality Part II: Applications and Algorithms

Posted by Benjamin Grimmer
Publication date: 2021
Research language: English
Author: Benjamin Grimmer





The first part of this work established the foundations of a radial duality between nonnegative optimization problems, inspired by the work of Renegar (2016). Here we utilize our radial duality theory to design and analyze projection-free optimization algorithms that operate by solving a radially dual problem. In particular, we consider radial subgradient, smoothing, and accelerated methods that are capable of solving a range of constrained convex and nonconvex optimization problems and that can scale up more efficiently than their classic counterparts. These algorithms enjoy the same benefits as their predecessors, avoiding Lipschitz continuity assumptions and costly orthogonal projections, in our newfound, broader context. Our radial duality further allows us to understand the effects and benefits of smoothness and growth conditions on the radial dual and consequently on our radial algorithms.
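To make the dual approach concrete, here is a minimal numerical sketch (ours, not the authors' code) of minimizing a radially dual problem and mapping the solution back to the primal. It assumes the upper radial transformation f^Γ(y) = sup{v > 0 : v·f(y/v) ≤ 1} from Part I and a hypothetical instance f(x) = 1 - ||x||², for which the dual has a closed form.

```python
# A minimal sketch (not the paper's code) of solving a problem through its
# radial dual. Hypothetical instance: maximize the concave f(x) = 1 - ||x||^2.
# Under the upper radial transformation f^Gamma(y) = sup{v > 0 : v*f(y/v) <= 1},
# this instance gives the closed form f^Gamma(y) = (1 + sqrt(1 + 4||y||^2)) / 2,
# a convex function whose gradient norm stays below 1 -- so plain gradient
# descent needs neither projections nor a Lipschitz constant for f.
import numpy as np

def f(x):
    return 1.0 - x @ x

def f_radial(y):
    return (1.0 + np.sqrt(1.0 + 4.0 * (y @ y))) / 2.0

def grad_f_radial(y):
    return 2.0 * y / np.sqrt(1.0 + 4.0 * (y @ y))

y = np.array([2.0, -1.5])             # arbitrary dual starting point
for _ in range(500):                  # gradient descent on the radial dual
    y = y - 0.5 * grad_f_radial(y)

v = f_radial(y)
x = y / v                             # map the dual minimizer back to the primal
print("min f^Gamma ~", v)             # ~1.0; here sup f = 1/min f^Gamma = 1.0
print("recovered x ~", x, "with f(x) ~", f(x))
```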




Read also

Benjamin Grimmer (2021)
Renegar (2016) introduced a novel approach to transforming generic conic optimization problems into unconstrained, uniformly Lipschitz continuous minimization. We introduce radial transformations generalizing these ideas, equipped with an entirely new motivation and development that avoids any reliance on convex cones or functions. Perhaps of greatest practical importance, this facilitates the development of new families of projection-free first-order methods applicable even in the presence of nonconvex objectives and constraint sets. Our generalized construction of this radial transformation uncovers that it is dual (i.e., self-inverse) for a wide range of functions including all concave objectives. This gives a powerful new duality relating optimization problems to their radially dual problems. For a broad class of functions, we characterize continuity, differentiability, and convexity under the radial transformation, as well as develop a calculus for it. This radial duality provides a strong foundation for designing projection-free radial optimization algorithms, which is carried out in the second part of this work.
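As a quick sanity check of the claimed self-inverse property, the following sketch (ours) evaluates f^Γ(y) = sup{v > 0 : v·f(y/v) ≤ 1} by bisection for the concave semicircle f(x) = sqrt(1 - x²), whose transform is sqrt(1 + y²), and confirms numerically that applying the transformation twice recovers f.

```python
# A numerical sanity check (our sketch) that the radial transformation is
# self-inverse on a concave example. We evaluate
#   f^Gamma(y) = sup{ v > 0 : v * f(y/v) <= 1 }
# by bisection; for this f the predicate holds exactly on an interval (0, V],
# so bisection between a feasible lo and an infeasible hi converges to V.
import math

def radial(f, y, lo=1e-9, hi=1e9, iters=100):
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mid * f(y / mid) <= 1.0:
            lo = mid               # still feasible: the sup lies above mid
        else:
            hi = mid
    return lo

f = lambda x: math.sqrt(max(1.0 - x * x, 0.0))   # concave upper semicircle
g = lambda y: radial(f, y)                       # f^Gamma, equals sqrt(1 + y^2)

for x in [0.0, 0.3, 0.6, 0.9]:
    print(x, f(x), radial(g, x))   # (f^Gamma)^Gamma matches f at each point
```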
This paper focuses on coordinate update methods, which are useful for solving problems involving large or high-dimensional datasets. They decompose a problem into simple subproblems, each of which updates one variable, or a small block of variables, while fixing the others. These methods can deal with linear and nonlinear mappings, smooth and nonsmooth functions, and convex and nonconvex problems. In addition, they are easy to parallelize. The performance of coordinate update methods hinges on the subproblems being cheap to solve. To derive simple subproblems for several new classes of applications, this paper systematically studies coordinate-friendly operators that perform low-cost coordinate updates. Based on the discovered coordinate-friendly operators, together with operator splitting techniques, we obtain new coordinate update algorithms for a variety of problems in machine learning, image processing, and subareas of optimization; several of these problems are treated with coordinate update methods for the first time. The resulting algorithms scale to large instances through parallel and even asynchronous computing. We present numerical examples to illustrate their effectiveness.
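To illustrate what a coordinate-friendly update looks like, here is a generic sketch (ours, not code from the paper): in least squares, minimizing over a single coordinate has a closed form, and keeping the residual up to date makes each update cost O(m) rather than a full O(mn) pass.

```python
# Cyclic coordinate descent on least squares min_x ||Ax - b||^2. Maintaining
# the residual r = b - Ax incrementally keeps each coordinate update at O(m)
# cost -- the kind of low-cost step the paper calls coordinate friendly.
import numpy as np

rng = np.random.default_rng(0)
m, n = 100, 20
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

x = np.zeros(n)
r = b - A @ x                          # residual, updated incrementally
col_sq = (A * A).sum(axis=0)           # precomputed column norms ||A_i||^2

for _ in range(50):                    # cyclic sweeps over coordinates
    for i in range(n):
        delta = (A[:, i] @ r) / col_sq[i]   # exact minimizer along coordinate i
        x[i] += delta
        r -= delta * A[:, i]                # O(m) residual update

best = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(A @ x - b), np.linalg.norm(A @ best - b))  # nearly equal
```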
Part I of this work [2] developed the exact diffusion algorithm to remove the bias that is characteristic of distributed solutions for deterministic optimization problems. The algorithm was shown to be applicable to a larger set of combination policies than earlier approaches in the literature. In particular, the combination matrices are not required to be doubly stochastic, a requirement that imposes stringent conditions on the graph topology and communication protocol. In this Part II, we examine the convergence and stability properties of exact diffusion in some detail and establish its linear convergence rate. We also show that it has a wider stability range than the EXTRA consensus solution, meaning that it is stable for a wider range of step sizes and can therefore attain faster convergence rates. Analytical examples and numerical simulations illustrate the theoretical findings.
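For concreteness, here is a toy simulation (our own sketch) of the adapt-correct-combine form in which exact diffusion is commonly stated; the ring topology, Metropolis weights, and step size are illustrative assumptions, not the paper's setup.

```python
# Toy exact-diffusion run on quadratic local costs J_i(w) = 0.5*(w - c_i)^2,
# so the network-wide minimizer is mean(c). Each iteration: adapt (local
# gradient step), correct (cancels the fixed-point bias of plain diffusion),
# combine (mix with neighbors through (I + A)/2).
import numpy as np

n = 5
c = np.array([1.0, 2.0, 3.0, 4.0, 5.0])      # local targets; optimum is 3.0
A = np.zeros((n, n))                          # doubly stochastic ring weights
for i in range(n):
    A[i, i] = 0.5
    A[i, (i - 1) % n] = 0.25
    A[i, (i + 1) % n] = 0.25
A_bar = (A + np.eye(n)) / 2                   # the (I + A)/2 combination matrix

mu = 0.3                                      # step size (illustrative)
w = np.zeros(n)
psi_prev = w.copy()
for _ in range(300):
    psi = w - mu * (w - c)                    # adapt: local gradient step
    phi = psi + w - psi_prev                  # correct: removes the bias
    w = A_bar @ phi                           # combine: average with neighbors
    psi_prev = psi

print(w)  # every agent converges to ~3.0, the exact global minimizer
```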
Harry Buhrman (1998)
The incompressibility method is an elementary yet powerful proof technique that has been used successfully in many areas. To further demonstrate its power and elegance, we exhibit new simple proofs using the incompressibility method.
We aim to give an overview of how to derive the dynamic programming principle for a general stochastic control/stopping problem using measurable selection techniques. By considering their martingale problem formulation, we show how to check the required measurability conditions for different …