
Computable Centering Methods for Spiraling Algorithms and Their Duals, with Motivations from the Theory of Lyapunov Functions

Published by Scott Lindstrom
Publication date: 2020
Research field:
Paper language: English





Splitting methods like Douglas--Rachford (DR), ADMM, and FISTA solve problems whose objectives are sums of functions that may be evaluated separately, and all frequently show signs of spiraling. We show for prototypical feasibility problems that the circumcentered-reflection method (CRM), subgradient projections, and Newton--Raphson are all describable as gradient-based methods for minimizing Lyapunov functions constructed for DR operators, with the former returning the minimizers of spherical surrogates for the Lyapunov function. We study the more general class of operators that minimize such surrogates. In particular, we introduce a new method that shares these properties but with the added advantages that it: 1) does not rely on subproblems (e.g. reflections) and so may be applied for any operator whose iterates spiral; 2) provably has the aforementioned Lyapunov properties with few structural assumptions and so is generically suitable for primal/dual implementation; and 3) maps spaces of reduced dimension into themselves whenever the original operator does. This makes possible the first primal/dual implementation of a method that seeks the center of spiraling iterates; we describe this implementation and provide a computed example (basis pursuit).
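At its core, the centering step described above amounts to locating the point equidistant from consecutive iterates: for three iterates x, Tx, T(Tx) of a spiraling map T, that point is their circumcenter. The sketch below shows the computation; the contractive rotation standing in for T and all parameter values are illustrative assumptions, not the paper's examples.

```python
import numpy as np

def circumcenter(p0, p1, p2):
    """Point in the affine hull of p0, p1, p2 equidistant from all three,
    found by solving a 2x2 Gram system (fails if the points are collinear)."""
    v1, v2 = p1 - p0, p2 - p0
    G = np.array([[v1 @ v1, v1 @ v2],
                  [v1 @ v2, v2 @ v2]])
    a, b = np.linalg.solve(G, 0.5 * np.array([v1 @ v1, v2 @ v2]))
    return p0 + a * v1 + b * v2

# Illustrative spiraling operator: a contractive rotation of the plane.
theta, rho = 0.5, 0.95
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
T = lambda x: rho * R @ x

x = np.array([1.0, 0.0])
for _ in range(5):
    x = circumcenter(x, T(x), T(T(x)))   # jump toward the spiral's center
print(np.linalg.norm(x))                 # rapidly approaches the fixed point 0
```

For a pure rotation the three iterates lie on a circle about the fixed point, so the circumcenter recovers it exactly; for a contractive spiral it lands close to it, which is the intuition for why centering steps accelerate spiraling iterations.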


Read also

We revisit the feasibility approach to the construction of compactly supported smooth orthogonal wavelets on the line. We highlight its flexibility and illustrate how symmetry and cardinality properties are easily embedded in the design criteria. We solve the resulting wavelet feasibility problems using recently introduced centering methods, and we compare performance. Solutions admit real-valued compactly supported smooth orthogonal scaling functions and wavelets with near symmetry and near cardinality properties.
This paper considers the problem of designing accelerated gradient-based algorithms for optimization and saddle-point problems. The class of objective functions is defined by a generalized sector condition. This class contains strongly convex functions with Lipschitz gradients but also non-convex functions, which makes it possible to address not only optimization problems but also saddle-point problems. The proposed design procedure relies on a suitable class of Lyapunov functions and on convex semi-definite programming. The proposed synthesis allows the design of algorithms that reach the performance of state-of-the-art accelerated gradient methods and beyond.
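To make the Lyapunov/semi-definite-programming connection concrete, here is a minimal sketch of a standard instance of the idea: certifying the contraction rate of plain gradient descent under an (m, L) sector condition by solving a small SDP with cvxpy. This follows the well-known sector-condition analysis rather than this paper's exact synthesis; m, L, and the step size are illustrative assumptions.

```python
import numpy as np
import cvxpy as cp

# Assumed problem class: m-strongly convex, L-smooth (values illustrative).
m, L = 1.0, 10.0
alpha = 2.0 / (m + L)      # classic step size; the tight rate is (L-m)/(L+m)
A, B = 1.0, -alpha         # dynamics x+ = A x + B u with u = grad f(x)

# Sector condition for gradients of such f (fixed point shifted to 0):
# (u - m x)(L x - u) >= 0, i.e. [x u] M [x u]^T >= 0.
M = np.array([[-m * L, (m + L) / 2],
              [(m + L) / 2, -1.0]])

def certifiable(rho):
    """Can V(x) = p x^2 certify V(x+) <= rho^2 V(x) over the sector?"""
    p, lam = cp.Variable(nonneg=True), cp.Variable(nonneg=True)
    # S-procedure: V(x+) - rho^2 V(x) + lam * sector <= 0 for all (x, u).
    Q = (p * np.array([[A**2, A * B], [A * B, B**2]])
         - rho**2 * p * np.array([[1.0, 0.0], [0.0, 0.0]])
         + lam * M)
    prob = cp.Problem(cp.Minimize(0), [Q << 0, p >= 1])
    prob.solve(solver=cp.SCS)
    return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)

lo, hi = 0.0, 1.0          # bisect on the contraction rate rho
for _ in range(25):
    mid = (lo + hi) / 2
    if certifiable(mid):
        hi = mid
    else:
        lo = mid
print(hi, (L - m) / (L + m))   # both close to 0.8182
```

The bisection recovers the known rate (L-m)/(L+m) for this step size, illustrating how a Lyapunov decrease condition becomes a feasibility question in the parameters of V.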
We propose a sampling-based approach to learn Lyapunov functions for a class of discrete-time autonomous hybrid systems that admit a mixed-integer representation. Such systems include autonomous piecewise affine systems, closed-loop dynamics of linear systems with model predictive controllers, piecewise affine/linear complementarity/mixed-logical dynamical systems in feedback with a ReLU neural network controller, etc. The proposed method comprises an alternation between a learner and a verifier to find a valid Lyapunov function inside a convex set of Lyapunov function candidates. In each iteration, the learner uses a collection of state samples to select a Lyapunov function candidate through a convex program in the parameter space. The verifier then solves a mixed-integer quadratic program in the state space to either validate the proposed Lyapunov function candidate or reject it with a counterexample, i.e., a state where the Lyapunov condition fails. This counterexample is then added to the sample set of the learner to refine the set of Lyapunov function candidates. By designing the learner and the verifier according to the analytic center cutting-plane method from convex optimization, we show that when the set of Lyapunov functions is full-dimensional in the parameter space, our method finds a Lyapunov function in a finite number of steps. We demonstrate our stability analysis method on closed-loop MPC dynamical systems and a ReLU neural network controlled PWA system.
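A stripped-down version of this alternation, for intuition only: the sketch below learns a quadratic Lyapunov function V(x) = x'Px for a stable linear map, with the paper's analytic-center learner replaced by cheap halfspace projections and its mixed-integer verifier replaced by a scan of the unit circle (sufficient here since V and the dynamics are homogeneous). The dynamics and tolerances are illustrative assumptions.

```python
import numpy as np

A = np.array([[0.5, 0.4], [-0.3, 0.6]])   # toy stable autonomous dynamics
f = lambda x: A @ x

def learner(samples):
    """Fit symmetric P with V(f(x)) - V(x) <= -||x||^2 on the samples.
    Each constraint is linear in P, so cyclic halfspace projections
    (a stand-in for the analytic-center step) find a feasible P."""
    P = np.eye(2)
    for _ in range(500):
        for x in samples:
            y = f(x)
            gap = y @ P @ y - x @ P @ x + x @ x
            if gap > 0:                      # violated: project onto halfspace
                G = np.outer(y, y) - np.outer(x, x)
                P -= gap * G / np.linalg.norm(G) ** 2
    return P

def verifier(P, n=200):
    """Stand-in for the mixed-integer verifier: scan the unit circle,
    which suffices here because all conditions are homogeneous in x."""
    for t in np.linspace(0.0, 2 * np.pi, n):
        x = np.array([np.cos(t), np.sin(t)])
        if f(x) @ P @ f(x) - x @ P @ x + x @ x > 1e-6:
            return x                         # counterexample
    return None

samples = [np.array([1.0, 0.0])]
for _ in range(20):
    P = learner(samples)
    cex = verifier(P)
    if cex is None:
        print("candidate certified on the grid:\n", P)
        break
    samples.append(cex)                      # refine the candidate set
```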
Long Chen, Hao Luo (2021)
We present a unified convergence analysis for first-order convex optimization methods using the concept of strong Lyapunov conditions. Combining this with suitable time scaling factors, we are able to handle both convex and strongly convex cases, and establish contraction properties of Lyapunov functions for many existing ordinary differential equation models. Then we derive prevailing first-order optimization algorithms, such as proximal gradient methods, heavy ball methods (also known as momentum methods), Nesterov accelerated gradient methods, and accelerated proximal gradient methods from numerical discretizations of corresponding dynamical systems. We also apply strong Lyapunov conditions to the discrete level and provide a more systematic analysis framework. Another contribution is a novel second-order dynamical system called Hessian-driven Nesterov accelerated gradient flow, which can be used to design and analyze accelerated first-order methods for smooth and non-smooth convex optimization.
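As a concrete instance of this derive-by-discretization viewpoint, the heavy ball method drops out of the ODE x'' + gamma x' + grad f(x) = 0: a central difference for x'' and a backward difference for x' with step h give x_{k+1} = x_k + beta (x_k - x_{k-1}) - alpha grad f(x_k), with beta = 1 - gamma h and alpha = h^2. A minimal sketch, where the quadratic objective and parameter values are illustrative assumptions:

```python
import numpy as np

# Heavy-ball ODE:  x'' + gamma x' + grad f(x) = 0.
# Central difference for x'', backward difference for x', step h:
#   x_{k+1} = x_k + beta (x_k - x_{k-1}) - alpha grad_f(x_k),
# with beta = 1 - gamma*h and alpha = h**2.
H = np.diag([1.0, 10.0])               # illustrative ill-conditioned quadratic
f = lambda x: 0.5 * x @ H @ x
grad_f = lambda x: H @ x

gamma, h = 2.0, 0.25
beta, alpha = 1 - gamma * h, h ** 2    # beta = 0.5, alpha = 0.0625

x_prev = x = np.array([1.0, 1.0])
for _ in range(200):
    x, x_prev = x + beta * (x - x_prev) - alpha * grad_f(x), x
print(f(x))                            # ~0: the discretized flow minimizes f
```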
We propose a learning-based method for Lyapunov stability analysis of piecewise affine dynamical systems in feedback with piecewise affine neural network controllers. The proposed method consists of an iterative interaction between a learner and a verifier, where in each iteration, the learner uses a collection of samples of the closed-loop system to propose a Lyapunov function candidate as the solution to a convex program. The learner then queries the verifier, which solves a mixed-integer program to either validate the proposed Lyapunov function candidate or reject it with a counterexample, i.e., a state where the stability condition fails. This counterexample is then added to the sample set of the learner to refine the set of Lyapunov function candidates. We design the learner and the verifier based on the analytic center cutting-plane method, in which the verifier acts as the cutting-plane oracle to refine the set of Lyapunov function candidates. We show that when the set of Lyapunov functions is full-dimensional in the parameter space, the overall procedure finds a Lyapunov function in a finite number of iterations. We demonstrate the utility of the proposed method in searching for quadratic and piecewise quadratic Lyapunov functions.