The convex feasibility problem (CFP) is at the core of the modeling of many problems in various areas of science. Subgradient projection methods are important tools for solving the CFP because they enable the use of subgradient calculations instead of orthogonal projections onto the individual sets of the problem. Working in a real Hilbert space, we show that the sequential subgradient projection method is perturbation resilient. By this we mean that under appropriate conditions the sequence generated by the method converges weakly, and sometimes also strongly, to a point in the intersection of the given subsets of the feasibility problem, despite certain perturbations that are allowed in each iterative step. Unlike in previous work on solving the convex feasibility problem, the functions that induce the problem's subsets need not be convex. Instead, we allow them to belong to a wider and richer class of functions satisfying a weaker condition that we call zero-convexity. This class, which is introduced and discussed here, holds promise for solving optimization problems in various areas, especially in non-smooth and non-convex optimization. The relevance of this study to approximate minimization and to the recent superiorization methodology for constrained optimization is explained.
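Since the abstract centers on the sequential subgradient projection method, a minimal sketch of the classical cyclic subgradient projection iteration may help fix ideas. It assumes convex constraint functions f_i with subgradient oracles, whereas the paper weakens convexity to zero-convexity and additionally tolerates perturbations in each step; the name cyclic_subgradient_projection and its parameters are illustrative choices, not taken from the paper.

    import numpy as np

    def cyclic_subgradient_projection(fs, subgrads, x0, relaxation=1.0,
                                      tol=1e-8, max_sweeps=1000):
        # Solve the CFP "find x with f_i(x) <= 0 for all i" by sweeping
        # cyclically over the constraints; each violated constraint triggers
        # one subgradient projection step:
        #   x <- x - relaxation * (f_i(x) / ||g||^2) * g,  g in the
        # subdifferential of f_i at x, with relaxation in (0, 2).
        x = np.asarray(x0, dtype=float)
        for _ in range(max_sweeps):
            worst = 0.0
            for f, g in zip(fs, subgrads):
                val = f(x)
                if val > 0.0:                    # constraint violated at x
                    grad = g(x)
                    denom = float(np.dot(grad, grad))
                    if denom > 0.0:
                        x = x - relaxation * (val / denom) * grad
                    worst = max(worst, val)
            if worst <= tol:                     # (near-)feasible for all i
                return x
        return x

    # Toy CFP in R^2: the half-planes x + y <= 1 and x >= 0.
    fs = [lambda x: x[0] + x[1] - 1.0, lambda x: -x[0]]
    subgrads = [lambda x: np.array([1.0, 1.0]), lambda x: np.array([-1.0, 0.0])]
    print(cyclic_subgradient_projection(fs, subgrads, [5.0, 5.0]))

On this toy instance a single projection step onto the violated half-plane already lands at the feasible point (0.5, 0.5); the appeal of the scheme is that only f_i(x) and one subgradient are needed per step, never an orthogonal projection onto C_i.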
We propose finitely convergent methods for solving convex feasibility problems defined over a possibly infinite pool of constraints. Following other works in this area, we assume that the interior of the solution set is nonempty and that certain over…
This paper considers a general convex constrained problem setting where the functions are not assumed to be differentiable or Lipschitz continuous. Our motivation is to find a simple first-order method for solving a wide range of convex optimization problems …
We revisit the feasibility approach to the construction of compactly supported smooth orthogonal wavelets on the line. We highlight its flexibility and illustrate how symmetry and cardinality properties are easily embedded in the design criteria. We …
We introduce a geometrically transparent strict saddle property for nonsmooth functions. This property guarantees that simple proximal algorithms on weakly convex problems converge only to local minimizers when randomly initialized. We argue that …
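This abstract is truncated, so what follows is only a generic illustration of the kind of "simple proximal algorithm" it refers to: the proximal point method on the weakly convex function f(y) = |y^2 - 1|, whose critical point at 0 is not a local minimizer and whose minimizers are +-1. The test function, step size, and helper proximal_point are assumptions for illustration, not the paper's construction.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def proximal_point(f, x0, step=0.4, iters=60):
        # Proximal point iteration:
        #   x_{k+1} = argmin_y f(y) + (y - x_k)^2 / (2 * step).
        # For a rho-weakly convex f and step < 1/rho, each subproblem is
        # strongly convex, so the 1-D argmin below is well defined.
        x = float(x0)
        for _ in range(iters):
            x = minimize_scalar(lambda y: f(y) + (y - x) ** 2 / (2.0 * step)).x
        return x

    # f is 2-weakly convex; its minimizers are +-1, while y = 0 is a
    # critical point that is not a local minimizer.
    f = lambda y: abs(y * y - 1.0)
    rng = np.random.default_rng(0)
    for x0 in rng.uniform(-2.0, 2.0, size=3):   # random initializations
        print(f"start {x0:+.3f} -> limit {proximal_point(f, x0):+.3f}")

Consistent with the abstract's claim, the random starts drive the iterates to the minimizers +-1; only an initialization at exactly 0 would stall at the non-minimizing critical point.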
Navigation tasks often cannot be defined in terms of a target, either because global position information is unavailable or unreliable or because the target location is not explicitly known a priori. Such a task is then often defined indirectly as a source …