
Superiorization vs. Accelerated Convex Optimization: The Superiorized/Regularized Least-Squares Case

Published by: Mrs. Stefania Petra
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





We study and compare superiorization and convex optimization approaches to the reconstruction problem of computing superiorized/regularized least-squares solutions of underdetermined linear systems with nonnegativity bounds on the variables. On the superiorization side, the state of the art for this problem class is examined, and a novel approach is proposed that employs proximal mappings and is structurally similar to the established forward-backward optimization approach. On the convex optimization side, accelerated forward-backward splitting with inexact proximal maps is worked out and applied both to the natural splitting (least-squares term / regularizer) and to the reverse splitting (regularizer / least-squares term). Our numerical findings suggest that, after appropriate parameter tuning, superiorization can approach the solution of the optimization problem and achieves comparable results at significantly lower cost. On the other hand, applying accelerated forward-backward optimization to the reverse splitting slightly outperforms superiorization, which suggests that, with a suitable problem splitting, convex optimization can approach superiorization as well.
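
For concreteness, the following is a minimal sketch of the accelerated forward-backward (FISTA-type) iteration for the natural splitting, assuming an $\ell_1$ regularizer with nonnegativity bounds and an exact proximal map; the paper additionally treats inexact proximal maps and the reverse splitting, and `A`, `b`, `lam` are hypothetical inputs, not the paper's test data.

```python
import numpy as np

def fista_nonneg_l1(A, b, lam, num_iters=200):
    """Accelerated forward-backward splitting (FISTA) for the illustrative
    problem  min_x 0.5*||A x - b||^2 + lam*||x||_1  subject to x >= 0.
    The prox of the nonsmooth part is a soft-threshold followed by
    projection onto the nonnegative orthant (an assumption made here;
    the paper's exact regularizer is not specified in this abstract)."""
    L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(num_iters):
        grad = A.T @ (A @ y - b)              # forward (gradient) step on the LS term
        z = y - grad / L
        x_new = np.maximum(z - lam / L, 0.0)  # backward (prox) step: shrink, then project
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # Nesterov extrapolation
        x, t = x_new, t_new
    return x
```

In the reverse splitting, the roles swap: the gradient step acts on the regularizer, and the proximal map of the least-squares term requires an inner iterative solve, which is why inexact proximal maps matter there.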

Read also

Jongho Park (2020)
Based on the observation that additive Schwarz methods for general convex optimization can be interpreted as gradient methods, we propose an acceleration scheme for additive Schwarz methods. By adopting acceleration techniques developed for gradient methods, such as momentum and adaptive restarting, the convergence rate of additive Schwarz methods is greatly improved. The proposed acceleration scheme does not require any a priori information on the levels of smoothness and sharpness of the target energy functional, so it can be applied to various convex optimization problems. Numerical results for linear elliptic problems, nonlinear elliptic problems, nonsmooth problems, and nonsharp problems are provided to highlight the superiority and broad applicability of the proposed scheme.
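
As a self-contained illustration of the borrowed acceleration techniques (momentum plus adaptive restarting), here is a minimal Nesterov-accelerated gradient loop with a gradient-based restart test; the paper's actual contribution, transferring this scheme to additive Schwarz iterations, is not reproduced here, and `grad_f`, `x0`, `step` are hypothetical inputs.

```python
import numpy as np

def accelerated_gradient_restart(grad_f, x0, step, num_iters=500):
    """Nesterov-accelerated gradient descent with gradient-based adaptive
    restarting: the momentum is reset whenever it points against the
    current gradient direction, which removes the need for a priori
    smoothness/sharpness parameters (a minimal sketch only)."""
    x = x0.copy()
    y, t = x0.copy(), 1.0
    for _ in range(num_iters):
        g = grad_f(y)
        x_new = y - step * g                  # gradient step at the extrapolated point
        if g @ (x_new - x) > 0:               # restart test: momentum is counterproductive
            t, y = 1.0, x_new.copy()          # reset momentum
        else:
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)
            t = t_new
        x = x_new
    return x
```
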
We introduce and analyze a space-time least-squares method for the unsteady Navier-Stokes system. Weak solutions in the two-dimensional case and regular solutions in the three-dimensional case are considered. From any initial guess, we construct a minimizing sequence for the least-squares functional that converges strongly to a solution of the Navier-Stokes system. After a finite number of iterates, related to the value of the viscosity constant, the convergence is quadratic. Numerical experiments in the two-dimensional case support our analysis. This globally convergent least-squares approach is related to the damped Newton method when the latter is used to solve the Navier-Stokes system through a variational formulation.
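
The backbone of such a globally convergent least-squares approach can be sketched in finite dimensions as a damped Newton iteration on $E(x) = \frac{1}{2}\Vert F(x)\Vert^2$ for a generic system $F(x) = 0$; this is only an analogy, since the paper's functional is posed in space-time with the Navier-Stokes residual, and `F`, `jac`, `x0` are hypothetical inputs.

```python
import numpy as np

def damped_newton_least_squares(F, jac, x0, max_iters=50, tol=1e-12):
    """Damped Newton iteration driving the least-squares functional
    E(x) = 0.5*||F(x)||^2 to zero for a generic nonlinear system
    F(x) = 0 (a finite-dimensional sketch of the globally convergent
    least-squares idea; not the paper's space-time formulation)."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iters):
        r = F(x)
        E = 0.5 * (r @ r)
        if E < tol:
            break
        d = np.linalg.lstsq(jac(x), -r, rcond=None)[0]  # Newton direction
        lam = 1.0
        # Damping: halve the step until the functional decreases sufficiently;
        # near the solution lam = 1 is accepted and convergence becomes quadratic.
        while 0.5 * np.linalg.norm(F(x + lam * d)) ** 2 > (1.0 - 0.5 * lam) * E and lam > 1e-8:
            lam *= 0.5
        x = x + lam * d
    return x
```
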
Yuning Yang (2019)
The epsilon alternating least squares ($\epsilon$-ALS) algorithm is developed and analyzed for the canonical polyadic decomposition (approximation) of a higher-order tensor in which one or more of the factor matrices are assumed to be columnwise orthonormal. It is shown that the algorithm globally converges to a KKT point for all tensors without any assumption. For the original ALS, by further studying the properties of the polar decomposition, we also establish its global convergence under a reality assumption no stronger than those in the literature. These results completely address a question concerning global convergence raised in [L. Wang, M. T. Chu and B. Yu, \emph{SIAM J. Matrix Anal. Appl.}, 36 (2015), pp. 1--19]. In addition, an initialization procedure is proposed that possesses a provable lower bound when the number of columnwise orthonormal factors is one. Armed with this initialization procedure, numerical experiments show that $\epsilon$-ALS exhibits promising performance in terms of efficiency and effectiveness.
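
To make the constrained ALS update concrete, the following is a minimal sketch for a third-order tensor in which the first factor is kept columnwise orthonormal by taking the polar part of the corresponding least-squares matrix (computed via an SVD, assuming the rank does not exceed the first dimension), while the other factors receive standard least-squares updates. This is plain ALS, not the paper's $\epsilon$-ALS variant, and all inputs are hypothetical.

```python
import numpy as np

def khatri_rao(X, Y):
    """Columnwise Khatri-Rao product (rows ordered with Y's index fastest)."""
    return np.einsum('ir,jr->ijr', X, Y).reshape(-1, X.shape[1])

def als_cp_orth(T, R, num_sweeps=50, seed=0):
    """ALS sketch for a rank-R CP approximation of a 3rd-order tensor T,
    with the first factor constrained to be columnwise orthonormal; that
    factor is updated via the polar decomposition (solved with an SVD),
    the step whose properties the paper analyzes."""
    I, J, K = T.shape
    rng = np.random.default_rng(seed)
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))
    T1 = T.reshape(I, J * K)                     # mode-1 unfolding
    T2 = T.transpose(1, 0, 2).reshape(J, I * K)  # mode-2 unfolding
    T3 = T.transpose(2, 0, 1).reshape(K, I * J)  # mode-3 unfolding
    for _ in range(num_sweeps):
        # Orthonormal factor: polar part of T_(1) (B ⊙ C), i.e. the
        # orthogonal Procrustes solution A = U V^T from the SVD.
        U, _, Vt = np.linalg.svd(T1 @ khatri_rao(B, C), full_matrices=False)
        A = U @ Vt
        # Unconstrained factors: standard least-squares updates using the
        # Khatri-Rao Gram identity (M^T M = (A^T A) ∘ (C^T C) elementwise).
        B = T2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = T3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
```
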
The null distributed controllability of the semilinear heat equation $y_t - \Delta y + g(y) = f\,1_{\omega}$, assuming that $g$ satisfies the growth condition $g(s)/(\vert s\vert \log^{3/2}(1+\vert s\vert)) \rightarrow 0$ as $\vert s\vert \rightarrow \infty$ and that $g^\prime \in L^\infty_{loc}(\mathbb{R})$, was obtained by Fernandez-Cara and Zuazua in 2000. The proof, based on a fixed-point argument, makes use of precise estimates of the observability constant for a linearized heat equation; it does not, however, provide an explicit construction of a null control. Assuming that $g^\prime \in W^{s,\infty}(\mathbb{R})$ for some $s \in (0,1]$, we construct an explicit sequence converging strongly to a null control for the solution of the semilinear equation. The method, based on a least-squares approach, generalizes Newton-type methods and guarantees convergence whatever the initial element of the sequence. In particular, after a finite number of iterations, the convergence is superlinear with rate equal to $1+s$. Numerical experiments in the one-dimensional setting support our analysis.
We propose an accelerated meta-algorithm that yields accelerated methods for convex unconstrained minimization in different settings. As an application of the general scheme, we propose nearly optimal methods for minimizing smooth functions with Lipschitz derivatives of arbitrary order, as well as for smooth minimax optimization problems. The proposed meta-algorithm is more general than those in the literature and allows better convergence rates and practical performance to be obtained in several settings.