
Zero-Convex Functions, Perturbation Resilience, and Subgradient Projections for Feasibility-Seeking Methods

Added by Yair Censor
Publication date: 2014
Language: English





The convex feasibility problem (CFP) is at the core of the modeling of many problems in various areas of science. Subgradient projection methods are important tools for solving the CFP because they enable the use of subgradient calculations instead of orthogonal projections onto the individual sets of the problem. Working in a real Hilbert space, we show that the sequential subgradient projection method is perturbation resilient. By this we mean that, under appropriate conditions, the sequence generated by the method converges weakly, and sometimes also strongly, to a point in the intersection of the given subsets of the feasibility problem, despite certain perturbations which are allowed in each iterative step. Unlike previous works on solving the convex feasibility problem, the involved functions, which induce the feasibility problem's subsets, need not be convex. Instead, we allow them to belong to a wider and richer class of functions satisfying a weaker condition, which we call zero-convexity. This class, which is introduced and discussed here, holds promise for solving optimization problems in various areas, especially in non-smooth and non-convex optimization. The relevance of this study to approximate minimization and to the recent superiorization methodology for constrained optimization is explained.
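To make the iteration concrete, here is a minimal sketch of a sequential (cyclic) subgradient projection sweep with optional additive perturbations, for a feasibility problem described by constraints g_i(x) <= 0. It is an illustration of the general scheme, not the paper's exact algorithm; the function and parameter names (sequential_subgradient_projection, relaxation, perturbations) and the toy half-space example are assumptions made here for clarity.

```python
import numpy as np

def sequential_subgradient_projection(g_list, subgrad_list, x0,
                                      relaxation=1.0, n_sweeps=200,
                                      perturbations=None):
    """Sketch of a sequential subgradient projection method for the
    feasibility problem: find x with g_i(x) <= 0 for all i.

    g_list        : callables g_i(x) -> float
    subgrad_list  : callables returning a subgradient of g_i at x
    perturbations : optional callable k -> vector e_k added after each sweep
                    (mimicking the bounded perturbations allowed in each step)
    """
    x = np.asarray(x0, dtype=float)
    for k in range(n_sweeps):
        for g, subgrad in zip(g_list, subgrad_list):
            val = g(x)
            if val > 0.0:                    # constraint currently violated
                t = subgrad(x)
                denom = np.dot(t, t)
                if denom > 0.0:
                    # subgradient projection step toward the set {g <= 0}
                    x = x - relaxation * (val / denom) * t
        if perturbations is not None:
            x = x + perturbations(k)         # perturbed iteration
    return x

# toy usage: intersection of two half-spaces in R^2
g1 = lambda x: x[0] + x[1] - 1.0
g2 = lambda x: -x[0]
sg1 = lambda x: np.array([1.0, 1.0])
sg2 = lambda x: np.array([-1.0, 0.0])
x_star = sequential_subgradient_projection([g1, g2], [sg1, sg2],
                                           np.array([3.0, 3.0]))
```

In the paper's setting the functions g_i need only be zero-convex rather than convex, and perturbation resilience means that convergence is retained when suitably controlled perturbations are added at each step; the sketch above only mimics that structure with convex toy constraints.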



Related research

We propose finitely convergent methods for solving convex feasibility problems defined over a possibly infinite pool of constraints. Following other works in this area, we assume that the interior of the solution set is nonempty and that certain overrelaxation parameters form a divergent series. We combine our methods with a very general class of deterministic control sequences where, roughly speaking, we require that sooner or later we encounter a violated constraint if one exists. This requirement is satisfied, in particular, by the cyclic, repetitive and remotest set controls. Moreover, it is almost surely satisfied for random controls.
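As a rough illustration of the control sequences mentioned in the abstract above, the snippet below sketches cyclic, remotest-set, and random controls for choosing which constraint to operate on next. The names and the violation-vector interface are assumptions made for illustration, not the paper's notation.

```python
import numpy as np

def cyclic_control(num_sets):
    """Cyclic control: 0, 1, ..., m-1, 0, 1, ...; every violated
    constraint is revisited after at most one full cycle."""
    k = 0
    while True:
        yield k % num_sets
        k += 1

def remotest_set_control(violations):
    """Remotest-set control: pick the index of the most violated
    constraint, given violations[i] = max(g_i(x), 0)."""
    return int(np.argmax(violations))

def random_control(num_sets, rng=None):
    """Random control: uniform random indices; a violated constraint
    is then encountered again with probability one."""
    if rng is None:
        rng = np.random.default_rng()
    while True:
        yield int(rng.integers(num_sets))
```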
This paper considers a general convex constrained problem setting in which the functions are assumed to be neither differentiable nor Lipschitz continuous. Our motivation is to find a simple first-order method for solving a wide range of convex optimization problems with minimal requirements. We study the method of weighted dual averages (Nesterov, 2009) in this setting and prove that it is an optimal method.
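For context, here is a minimal sketch of a dual-averaging scheme in the spirit of Nesterov's method of weighted dual averages. The equal weights lambda_k = 1, the scaling beta_k proportional to sqrt(k), and the Euclidean prox-term centered at x0 are common illustrative choices, not necessarily the parameters analyzed in the paper.

```python
import numpy as np

def weighted_dual_averaging(subgrad, x0, n_iters=1000, gamma=1.0):
    """Sketch of weighted dual averaging for a convex, possibly
    nonsmooth objective on R^n.

    subgrad : callable x -> a subgradient of the objective at x
    gamma   : scaling of the prox term d(y) = ||y - x0||^2 / 2
    """
    x0 = np.asarray(x0, dtype=float)
    x = x0.copy()
    z = np.zeros_like(x0)          # running weighted sum of subgradients
    x_avg = np.zeros_like(x0)      # weighted average of the iterates
    weight_sum = 0.0
    for k in range(1, n_iters + 1):
        lam = 1.0                  # weights lambda_k (here all equal to 1)
        weight_sum += lam
        x_avg += (lam / weight_sum) * (x - x_avg)
        z += lam * subgrad(x)      # subgradient queried at the current iterate
        beta = gamma * np.sqrt(k)  # beta_k proportional to sqrt(k)
        x = x0 - z / beta          # minimizer of <z, y> + beta * ||y - x0||^2 / 2
    return x_avg

# toy usage: minimize the nonsmooth f(x) = ||x||_1 over R^3
x_min = weighted_dual_averaging(lambda x: np.sign(x),
                                np.array([2.0, -1.0, 0.5]))
```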
We revisit the feasibility approach to the construction of compactly supported smooth orthogonal wavelets on the line. We highlight its flexibility and illustrate how symmetry and cardinality properties are easily embedded in the design criteria. We solve the resulting wavelet feasibility problems using recently introduced centering methods, and we compare performance. Solutions admit real-valued compactly supported smooth orthogonal scaling functions and wavelets with near symmetry and near cardinality properties.
We introduce a geometrically transparent strict saddle property for nonsmooth functions. This property guarantees that simple proximal algorithms on weakly convex problems converge only to local minimizers, when randomly initialized. We argue that the strict saddle property may be a realistic assumption in applications, since it provably holds for generic semi-algebraic optimization problems.
Navigation tasks often cannot be defined in terms of a target, either because global position information is unavailable or unreliable, or because the target location is not explicitly known a priori. The task is then often defined indirectly as a source seeking problem, in which the autonomous agent navigates so as to minimize the convex potential induced by a source while avoiding obstacles. This work addresses the problem when only scalar measurements of the potential are available, i.e., without gradient information. To do so, it constructs an artificial potential over which exact gradient dynamics would generate a collision-free trajectory to the target in a world with convex obstacles. Then, leveraging extremum seeking control loops, it minimizes this artificial potential to navigate smoothly to the source location. We prove that the proposed solution not only finds the source, but does so while avoiding any obstacle. Numerical results with velocity-actuated particles, simulations with an omni-directional robot in ROS+Gazebo, and a robot-in-the-loop experiment are used to illustrate the performance of this approach.
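The sketch below illustrates only the basic extremum-seeking idea underlying that work, namely estimating a descent direction of a potential from scalar measurements via a sinusoidal dither, in one dimension and without obstacles or the artificial-potential construction. All names, gains, and frequencies (measure, omega, amp, gain, the toy quadratic potential) are illustrative assumptions, and the demodulation is a simplified version without filtering.

```python
import numpy as np

def extremum_seeking_1d(measure, x0, n_steps=20000, dt=1e-3,
                        omega=100.0, amp=0.2, gain=0.5):
    """Single-agent extremum-seeking sketch (1-D, velocity-actuated):
    gradient-free descent of a potential from scalar measurements.

    measure : callable x -> scalar potential reading at position x
    """
    x_hat = float(x0)                          # nominal agent position
    for k in range(n_steps):
        t = k * dt
        dither = amp * np.sin(omega * t)       # sinusoidal probing signal
        y = measure(x_hat + dither)            # scalar potential measurement
        grad_est = (2.0 / amp) * y * np.sin(omega * t)   # demodulated estimate
        x_hat -= gain * grad_est * dt          # descend the estimated gradient
    return x_hat

# toy usage: quadratic potential with its source (minimum) at x = 1.5
source = extremum_seeking_1d(lambda x: (x - 1.5) ** 2, x0=0.0)
```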