Non-Convex Split Feasibility Problems: Models, Algorithms and Theory

Added by Aviv Gibali
Publication date: 2020
Language: English





In this paper, we propose a catalog of iterative methods for solving the Split Feasibility Problem in the non-convex setting. We study four different optimization formulations of the problem, each of which is advantageous in different settings. For each model, we study relevant iterative algorithms, some of which are well known in this area and some of which are new. All of the studied methods, including the well-known CQ Algorithm, are proven to have global convergence guarantees in the non-convex setting under mild conditions on the problem's data.
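For context, the classical CQ iteration for the split feasibility problem (find $x \in C$ with $Ax \in Q$) is $x_{k+1} = P_C\big(x_k - \gamma A^T(Ax_k - P_Q(Ax_k))\big)$, with step size $\gamma \in (0, 2/\|A\|^2)$ in the convex case. Below is a minimal Python sketch of this convex-case baseline, assuming $C$ and $Q$ are boxes so that both projections reduce to coordinate-wise clipping; the function names and step-size choice are illustrative and not taken from the paper.

    import numpy as np

    def cq_algorithm(A, proj_C, proj_Q, x0, gamma=None, iters=500):
        # Classical CQ iteration for the split feasibility problem:
        # find x in C such that A @ x lies in Q.
        if gamma is None:
            # gamma in (0, 2/||A||^2) guarantees convergence in the convex case
            gamma = 1.0 / np.linalg.norm(A, 2) ** 2
        x = x0.astype(float)
        for _ in range(iters):
            Ax = A @ x
            x = proj_C(x - gamma * A.T @ (Ax - proj_Q(Ax)))
        return x

    # Example: C and Q are boxes, so the projections are clipping operations.
    A = np.array([[1.0, 2.0], [0.5, -1.0]])
    proj_C = lambda x: np.clip(x, -1.0, 1.0)
    proj_Q = lambda y: np.clip(y, 0.0, 2.0)
    x_star = cq_algorithm(A, proj_C, proj_Q, x0=np.array([5.0, -3.0]))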



Related research

We propose finitely convergent methods for solving convex feasibility problems defined over a possibly infinite pool of constraints. Following other works in this area, we assume that the interior of the solution set is nonempty and that certain overrelaxation parameters form a divergent series. We combine our methods with a very general class of deterministic control sequences where, roughly speaking, we require that sooner or later we encounter a violated constraint if one exists. This requirement is satisfied, in particular, by the cyclic, repetitive and remotest set controls. Moreover, it is almost surely satisfied for random controls.
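As a toy illustration of how a deterministic control sequence drives such a method (a generic cyclic projection loop over half-space constraints, not the authors' finitely convergent scheme), consider the sketch below; the overrelaxation parameters are supplied by the caller, in the spirit of the paper's assumption that they form a divergent series.

    import numpy as np

    def cyclic_projection(constraints, x0, relax, iters=1000, tol=1e-12):
        # Projection method with cyclic control for a convex feasibility
        # problem given by half-spaces <a, x> <= b.  `relax(k)` returns the
        # overrelaxation parameter used at iteration k.
        x = x0.astype(float)
        m = len(constraints)
        for k in range(iters):
            a, b = constraints[k % m]      # cyclic control: sweep the pool in order
            violation = a @ x - b
            if violation > tol:            # act only on violated constraints
                x = x - relax(k) * violation / (a @ a) * a
        return x

    # Feasible set: intersection of two half-spaces in the plane.
    cons = [(np.array([1.0, 0.0]), 1.0), (np.array([0.0, 1.0]), 1.0)]
    x = cyclic_projection(cons, np.array([5.0, 5.0]), relax=lambda k: 1.0)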
In this study, we present a general framework of outer approximation algorithms to solve convex vector optimization problems, in which the Pascoletti-Serafini (PS) scalarization is solved iteratively. This scalarization finds the minimum distance from a reference point, usually taken as a vertex of the current outer approximation, to the upper image along a given direction. We propose efficient methods to select the parameters (the reference point and direction vector) of the PS scalarization and analyze their effects on the overall performance of the algorithm. Unlike the existing vertex selection rules from the literature, the proposed methods do not require solving additional single-objective optimization problems. Using some test problems, we conduct an extensive computational study where three different measures are set as the stopping criteria: the approximation error, the runtime, and the cardinality of the solution set. We observe that the proposed variants perform well, especially in terms of runtime, compared to the existing variants from the literature.
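For reference, the Pascoletti-Serafini scalarization with reference point $v$ and direction $d$ is commonly stated as follows (standard textbook form; the notation is generic rather than taken from this paper):

$$\min_{x \in \mathcal{X},\, t \in \mathbb{R}} \; t \quad \text{s.t.} \quad v + t\,d - f(x) \in C,$$

where $C$ is the ordering cone; the optimal $t$ measures how far the reference point $v$ must be moved along $d$ to reach the upper image $f(\mathcal{X}) + C$.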
This paper analyzes the approximate solution of a split variational inclusion problem in the framework of infinite-dimensional Hilbert spaces. To this end, several inertial hybrid and shrinking projection algorithms are proposed with self-adaptive stepsizes that do not require knowledge of the norms of the given operators. Strong convergence properties of the proposed algorithms are obtained under mild constraints. Finally, an experimental application is given to illustrate the performance of the proposed methods in comparison with existing results.
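Self-adaptive stepsizes of this kind replace the operator norm with quantities computable at the current iterate. One widely used rule from the split-problem literature (a standard choice, not necessarily the exact rule of this paper) sets, with $f(x) = \tfrac{1}{2}\|(I - P_Q)Ax\|^2$,

$$\tau_n = \rho_n \frac{f(x_n)}{\|\nabla f(x_n)\|^2}, \qquad \nabla f(x) = A^*(I - P_Q)Ax, \qquad \rho_n \in (0, 4),$$

so that no estimate of $\|A\|$ is required.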
We study the robustness of accelerated first-order algorithms to stochastic uncertainties in gradient evaluation. Specifically, for unconstrained, smooth, strongly convex optimization problems, we examine the mean-squared error in the optimization variable when the iterates are perturbed by additive white noise. This type of uncertainty may arise in situations where an approximation of the gradient is sought through measurements of a real system or in a distributed computation over a network. Even though the underlying dynamics of first-order algorithms for this class of problems are nonlinear, we establish upper bounds on the mean-squared deviation from the optimal solution that are tight up to constant factors. Our analysis quantifies fundamental trade-offs between noise amplification and convergence rates obtained via any acceleration scheme similar to Nesterov's or heavy-ball methods. To gain additional analytical insight, for strongly convex quadratic problems, we explicitly evaluate the steady-state variance of the optimization variable in terms of the eigenvalues of the Hessian of the objective function. We demonstrate that the entire spectrum of the Hessian, rather than just its extreme eigenvalues, influences the robustness of noisy algorithms. We specialize this result to the problem of distributed averaging over undirected networks and examine the role of network size and topology in the robustness of noisy accelerated algorithms.
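A quick way to see the noise-amplification phenomenon described above is to simulate the heavy-ball method on a strongly convex quadratic with additive white noise in the gradient. The sketch below uses the standard heavy-ball tuning for quadratics; the spectrum, noise level, and horizon are illustrative choices, not values from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    # Strongly convex quadratic f(x) = 0.5 * x^T H x with a fixed spectrum.
    eigs = np.array([1.0, 10.0, 100.0])
    H = np.diag(eigs)
    L, m = eigs.max(), eigs.min()

    # Standard heavy-ball tuning for quadratics.
    alpha = 4.0 / (np.sqrt(L) + np.sqrt(m)) ** 2
    beta = ((np.sqrt(L) - np.sqrt(m)) / (np.sqrt(L) + np.sqrt(m))) ** 2

    sigma = 0.1                # std of the additive white gradient noise
    x_prev = x = np.zeros(3)
    sq_dev = []
    for _ in range(20000):
        grad = H @ x + sigma * rng.standard_normal(3)   # noisy gradient
        x, x_prev = x + beta * (x - x_prev) - alpha * grad, x
        sq_dev.append(x @ x)                            # optimum is the origin

    print("empirical steady-state MSE:", np.mean(sq_dev[5000:]))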
We study variational inequalities governed by a strongly monotone and Lipschitz continuous operator $F$ over a closed and convex set $S$. We assume that $S = C \cap A^{-1}(Q)$ is the nonempty solution set of a (multiple-set) split convex feasibility problem, where $C$ and $Q$ are closed and convex subsets of two real Hilbert spaces $\mathcal{H}_1$ and $\mathcal{H}_2$, respectively, and the operator $A$ acting between them is linear. We consider a modification of the gradient projection method whose main idea is to replace, at each step, the metric projection onto $S$ by a metric projection onto a half-space containing $S$. We propose three variants of a method for constructing such half-spaces that exploit the multiple-set and split structure of $S$. For the split part we make use of the Landweber transform.
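Such half-space relaxations are attractive because the projection onto a half-space is explicit: for $H = \{x : \langle a, x \rangle \le b\}$ with $a \neq 0$,

$$P_H(x) = x - \frac{\max\{0,\ \langle a, x \rangle - b\}}{\|a\|^2}\, a,$$

so each step costs only an inner product, in contrast to the generally intractable projection onto $S$ itself.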