Splitting methods such as Douglas--Rachford (DR), ADMM, and FISTA solve problems whose objectives are sums of functions that may be evaluated separately, and all frequently show signs of spiraling. We show, for prototypical feasibility problems, that the circumcentered-reflection method (CRM), subgradient projections, and Newton--Raphson are all describable as gradient-based methods for minimizing Lyapunov functions constructed for DR operators, with the former returning the minimizers of spherical surrogates for the Lyapunov function. We study the more general class of operators that minimize such surrogates. In particular, we introduce a new method that shares these properties but has the added advantages that it: 1) does not rely on subproblems (e.g. reflections) and so may be applied to any operator whose iterates spiral; 2) provably has the aforementioned Lyapunov properties under few structural assumptions and so is generically suitable for primal/dual implementation; and 3) maps spaces of reduced dimension into themselves whenever the original operator does. This makes possible the first primal/dual implementation of a method that seeks the center of spiraling iterates; we describe this implementation and provide a computed example (basis pursuit).
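For concreteness, here is a minimal sketch of the circumcentering step that CRM applies to x, R_A x, and R_B R_A x; the two-line feasibility problem, the set choices, and all identifiers are our illustration rather than anything from the paper.

import numpy as np

def reflect_line(x, d):
    # Reflection of x through the line spanned by direction d.
    d = d / np.linalg.norm(d)
    return 2.0 * (x @ d) * d - x

def circumcenter(p0, p1, p2):
    # Point in the affine hull of p0, p1, p2 equidistant from all three,
    # obtained by solving a 2x2 Gram system.
    v1, v2 = p1 - p0, p2 - p0
    G = np.array([[v1 @ v1, v1 @ v2],
                  [v1 @ v2, v2 @ v2]])
    rhs = 0.5 * np.array([v1 @ v1, v2 @ v2])
    a, b = np.linalg.solve(G, rhs)
    return p0 + a * v1 + b * v2

# Feasibility problem: find a point in A ∩ B for two lines in R^2.
dA, dB = np.array([1.0, 0.0]), np.array([1.0, 1.0])
x = np.array([3.0, 2.0])
for _ in range(5):
    rA = reflect_line(x, dA)    # R_A x
    rB = reflect_line(rA, dB)   # R_B R_A x
    if max(np.linalg.norm(rA - x), np.linalg.norm(rB - x)) < 1e-12:
        break                   # iterate is (numerically) feasible
    x = circumcenter(x, rA, rB) # CRM update
print(x)  # converges to the intersection {0}

Because reflections through lines containing the origin preserve norms, the three points lie on a circle centered at the intersection, which is why circumcentering locates the center of the spiral in one step for this toy problem.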
We revisit the feasibility approach to the construction of compactly supported smooth orthogonal wavelets on the line. We highlight its flexibility and illustrate how symmetry and cardinality properties are easily embedded in the design criteria. …
This paper considers the problem of designing accelerated gradient-based algorithms for optimization and saddle-point problems. The class of objective functions is defined by a generalized sector condition. This class of functions contains strongly convex …
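For orientation, here is a standard sector-type bound from absolute-stability analysis, in our notation rather than necessarily the paper's: the gradient of an m-strongly convex function f with L-Lipschitz gradient satisfies
\[
\big\langle \nabla f(x) - \nabla f(y) - m(x-y),\; L(x-y) - \big(\nabla f(x) - \nabla f(y)\big) \big\rangle \;\ge\; 0
\qquad \text{for all } x, y,
\]
and a generalized sector condition relaxes the two linear envelopes \(m(x-y)\) and \(L(x-y)\) to more general comparison functions.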
We propose a sampling-based approach to learn Lyapunov functions for a class of discrete-time autonomous hybrid systems that admit a mixed-integer representation. Such systems include autonomous piecewise affine systems, closed-loop dynamics of linear …
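For intuition, a textbook big-M encoding (ours, not necessarily the paper's formulation) of a piecewise affine update \(x^{+} = A_i x + a_i\) on polyhedral regions \(P_i = \{x : F_i x \le f_i\}\) uses binaries \(\delta_i\):
\[
\sum_i \delta_i = 1,\qquad
F_i\, x \;\le\; f_i + M(1-\delta_i)\mathbf{1},\qquad
\|x^{+} - A_i x - a_i\|_\infty \;\le\; M(1-\delta_i),\qquad
\delta_i \in \{0,1\},
\]
which is valid on a bounded domain for a sufficiently large constant \(M\): the binaries select the active region, making the dynamics mixed-integer representable.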
We present a unified convergence analysis for first-order convex optimization methods using the concept of strong Lyapunov conditions. Combining this with suitable time scaling factors, we are able to handle both convex and strongly convex cases, and …
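As a rough template, in our notation: for a flow \(x' = \mathcal{G}(x)\) with equilibrium \(x^\star\), a strong Lyapunov condition asks for more than monotone decay, e.g.
\[
-\nabla \mathcal{L}(x) \cdot \mathcal{G}(x) \;\ge\; c\,\mathcal{L}(x) + \sigma\,\|x - x^\star\|^2, \qquad c > 0,\ \sigma \ge 0,
\]
so that Grönwall's inequality gives \(\mathcal{L}(x(t)) \le e^{-ct}\,\mathcal{L}(x(0))\) along trajectories, and suitable discretizations inherit linear rates.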
We propose a learning-based method for Lyapunov stability analysis of piecewise affine dynamical systems in feedback with piecewise affine neural network controllers. The proposed method consists of an iterative interaction between a learner and a verifier …
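Schematically, such learner–verifier loops follow a counterexample-guided pattern: the learner fits a candidate V to finite samples and the verifier searches for a state where the Lyapunov decrease fails, feeding it back as a new sample. The toy sketch below (our construction, not the paper's algorithm: the dynamics, the LP learner, and the random-search verifier are all illustrative) uses a quadratic candidate V(x) = x'Px.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Toy stable piecewise affine dynamics (illustrative only).
A1 = np.array([[0.5, 0.2], [0.0, 0.6]])
A2 = np.array([[0.6, 0.0], [-0.2, 0.5]])

def step(x):
    return (A1 if x[0] >= 0 else A2) @ x

def phi(x):
    # V(x) = x' P x with P = [[p0, p1], [p1, p2]] is linear in p:
    # x' P x = p0*x1^2 + 2*p1*x1*x2 + p2*x2^2 = phi(x) . p
    return np.array([x[0] ** 2, 2 * x[0] * x[1], x[1] ** 2])

EPS = 0.05

def learner(samples):
    # LP feasibility: V(x) >= |x|^2 and V(step(x)) - V(x) <= -EPS*|x|^2
    # at every sample; returns p, or None if infeasible.
    A_ub, b_ub = [], []
    for x in samples:
        n2 = x @ x
        A_ub.append(-phi(x));               b_ub.append(-n2)        # positivity
        A_ub.append(phi(step(x)) - phi(x)); b_ub.append(-EPS * n2)  # decrease
    res = linprog(np.zeros(3), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(-50, 50)] * 3)
    return res.x if res.success else None

def verifier(p, n_probes=2000):
    # Random search for a state where the certified decrease fails.
    for _ in range(n_probes):
        x = rng.uniform(-1, 1, size=2)
        if phi(step(x)) @ p - phi(x) @ p > -EPS * (x @ x) + 1e-9:
            return x
    return None  # no counterexample found (sampled evidence only)

samples = [rng.uniform(-1, 1, size=2) for _ in range(10)]
for _ in range(20):
    p = learner(samples)
    cex = None if p is None else verifier(p)
    if p is not None and cex is None:
        print("candidate Lyapunov found:", p)
        break
    samples.append(cex if cex is not None else rng.uniform(-1, 1, size=2))

A real verifier would replace the random search with an exact certificate (e.g., a mixed-integer or SMT query), which is where methods in this line of work differ.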