
Analysis of Krylov Subspace Solutions of Regularized Nonconvex Quadratic Problems

Posted by Yair Carmon
Date published: 2018
Language: English

We provide convergence rates for Krylov subspace solutions to the trust-region and cubic-regularized (nonconvex) quadratic problems. Such solutions may be efficiently computed by the Lanczos method and have long been used in practice. We prove error bounds of the form $1/t^2$ and $e^{-4t/\sqrt{\kappa}}$, where $\kappa$ is a condition number for the problem and $t$ is the Krylov subspace order (the number of Lanczos iterations). We also provide lower bounds showing that our analysis is sharp.
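The Lanczos-based computation the abstract alludes to admits a short sketch: run $t$ Lanczos iterations to build an orthonormal basis $Q$ of the Krylov subspace $\mathrm{span}\{b, Ab, \dots, A^{t-1}b\}$ with tridiagonal $T = Q^\top A Q$, then solve the small $t$-dimensional trust-region problem exactly in that basis. The Python below is a minimal illustration under our own naming, ignoring Lanczos breakdown and the trust-region hard case; it is not the paper's implementation.

```python
import numpy as np

def lanczos(A, b, t):
    """t steps of Lanczos on A with start vector b (assumes no breakdown)."""
    n = b.size
    Q = np.zeros((n, t))
    alpha, beta = np.zeros(t), np.zeros(t)
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(t):
        w = A @ Q[:, j]
        if j > 0:
            w -= beta[j] * Q[:, j - 1]
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)  # full reorthogonalization
        if j + 1 < t:
            beta[j + 1] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j + 1]
    T = np.diag(alpha) + np.diag(beta[1:], 1) + np.diag(beta[1:], -1)
    return Q, T

def krylov_trust_region(A, b, R, t):
    """Solve min {1/2 x'Ax + b'x : ||x|| <= R} over the order-t Krylov subspace."""
    Q, T = lanczos(A, b, t)
    g = np.zeros(t); g[0] = np.linalg.norm(b)   # Q^T b = ||b|| e_1
    lam_lo = max(0.0, -np.linalg.eigvalsh(T)[0]) + 1e-12
    y = np.linalg.solve(T + lam_lo * np.eye(t), -g)
    if np.linalg.norm(y) <= R:                  # interior solution (hard case ignored)
        return Q @ y
    lam_hi = lam_lo + 1.0
    while np.linalg.norm(np.linalg.solve(T + lam_hi * np.eye(t), -g)) > R:
        lam_hi *= 2.0
    for _ in range(100):                        # bisection on the KKT multiplier
        mid = 0.5 * (lam_lo + lam_hi)
        if np.linalg.norm(np.linalg.solve(T + mid * np.eye(t), -g)) > R:
            lam_lo = mid
        else:
            lam_hi = mid
    return Q @ np.linalg.solve(T + lam_hi * np.eye(t), -g)
```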


Read also

Yair Carmon, John C. Duchi (2020)
We consider minimization of indefinite quadratics with either trust-region (norm) constraints or cubic regularization. Despite the nonconvexity of these problems we prove that, under mild assumptions, gradient descent converges to their global solutions, and give a non-asymptotic rate of convergence for the cubic variant. We also consider Krylov subspace solutions and establish sharp convergence guarantees to the solutions of both trust-region and cubic-regularized problems. Our rates mirror the behavior of these methods on convex quadratics and eigenvector problems, highlighting their scalability. When we use Krylov subspace solutions to approximate the cubic-regularized Newton step, our results recover the strongest known convergence guarantees to approximate second-order stationary points of general smooth nonconvex functions.
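As a concrete toy, here is the gradient-descent variant on the cubic-regularized quadratic $f(x) = \tfrac{1}{2}x^\top A x + b^\top x + \tfrac{\rho}{3}\|x\|^3$, whose gradient is $Ax + b + \rho\|x\|x$. The constant step size and zero initialization are illustrative assumptions only; the paper's guarantees rest on a more careful choice.

```python
import numpy as np

def cubic_reg_gd(A, b, rho, eta=1e-2, iters=5000):
    """Plain gradient descent on the cubic-regularized quadratic (toy sketch)."""
    x = np.zeros_like(b)                              # illustrative initialization
    for _ in range(iters):
        g = A @ x + b + rho * np.linalg.norm(x) * x   # grad f(x)
        x = x - eta * g                               # fixed step size (assumption)
    return x
```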
We apply novel inner-iteration preconditioned Krylov subspace methods to the interior-point algorithm for linear programming (LP). Inner-iteration preconditioners recently proposed by Morikuni and Hayami enable us to overcome the severe ill-conditioning of linear equations solved in the final phase of interior-point iterations. The Krylov subspace methods do not suffer from rank-deficiency and therefore no preprocessing is necessary even if rows of the constraint matrix are not linearly independent. By means of these methods, a new interior-point recurrence is proposed in order to omit one matrix-vector product at each step. Extensive numerical experiments are conducted over diverse instances of 138 LP problems including the Netlib, QAPLIB, Mittelmann and Atomizer Basis Pursuit collections. The largest problem has 434,580 unknowns. It turns out that our implementation is more robust than the standard public domain solvers SeDuMi (Self-Dual Minimization), SDPT3 (Semidefinite Programming Toh-Todd-Tutuncu) and the LSMR iterative solver in PDCO (Primal-Dual Barrier Method for Convex Objectives) without increasing CPU time. The proposed interior-point method based on iterative solvers succeeds in solving a fairly large number of LP instances from benchmark libraries under the standard stopping criteria. The work also presents a fairly extensive benchmark test for several renowned solvers including direct and iterative solvers.
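The linear-algebra bottleneck the abstract describes can be made concrete: for a standard-form LP $\min\{c^\top x : Ax = b,\ x \ge 0\}$, each interior-point step solves normal equations $A D^2 A^\top \Delta y = r$ with $D^2 = \mathrm{diag}(x/s)$, which become severely ill-conditioned as the products $x_i s_i$ approach zero. The sketch below solves them with plain conjugate gradients via SciPy; the inner-iteration preconditioners of Morikuni and Hayami that the paper actually employs are not reproduced here.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def normal_eq_direction(A, x, s, r):
    """One Krylov solve of A D^2 A^T dy = r, with D^2 = diag(x/s)."""
    d2 = x / s
    op = LinearOperator((A.shape[0], A.shape[0]),
                        matvec=lambda y: A @ (d2 * (A.T @ y)))
    dy, info = cg(op, r)   # plain CG for illustration; the paper uses
                           # inner-iteration (Morikuni-Hayami) preconditioning
    if info != 0:
        raise RuntimeError("CG stalled: normal equations are ill-conditioned")
    return dy
```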
We consider the problem of minimizing a block separable convex function (possibly nondifferentiable, and including constraints) plus Laplacian regularization, a problem that arises in applications including model fitting, regularizing stratified models, and multi-period portfolio optimization. We develop a distributed majorization-minimization method for this general problem, and derive a complete, self-contained, general, and simple proof of convergence. Our method is able to scale to very large problems, and we illustrate our approach on two applications, demonstrating its scalability and accuracy.
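A toy instance of the majorization-minimization idea, with scalar blocks: since a graph Laplacian satisfies $L \preceq 2D$ for $D = \mathrm{diag}(L)$, the coupling term $\tfrac{1}{2}x^\top L x$ can be majorized at the current iterate by a separable quadratic, so each block update becomes an independent proximal step and the iteration distributes across blocks. The majorizer choice $2D$ and the interface `prox_list[i](v, t)` evaluating $\mathrm{prox}_{t f_i}(v)$ are our assumptions, not necessarily the paper's.

```python
import numpy as np

def mm_step(prox_list, L, x):
    """One distributed-MM step for sum_i f_i(x_i) + (1/2) x^T L x (scalar blocks)."""
    deg = np.diag(L)               # vertex degrees; assumed positive
    g = L @ x                      # gradient of the coupling term at x^k
    x_new = np.empty_like(x)
    for i, prox in enumerate(prox_list):   # independent => parallelizable
        t = 1.0 / (2.0 * deg[i])           # separable majorizer weight 2*deg_i
        x_new[i] = prox(x[i] - t * g[i], t)
    return x_new
```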
A previous paper by the authors introduces an accelerated composite gradient (ACG) variant, namely AC-ACG, for solving nonconvex smooth composite optimization (N-SCO) problems. In contrast to other ACG variants, AC-ACG estimates the local upper curvature of the N-SCO problem using the average of the observed upper-Lipschitz curvatures obtained during previous iterations, and uses this estimate and two composite resolvent evaluations to compute the next iterate. This paper presents an alternative FISTA-type ACG variant, namely AC-FISTA, with two additional features: i) it performs, on average, one composite resolvent evaluation per iteration; and ii) it estimates the local upper curvature using the average of the previously observed upper (instead of upper-Lipschitz) curvatures. Together, these two properties yield a practical AC-FISTA variant that substantially outperforms earlier ACG variants, including the AC-ACG variants discussed in the aforementioned paper.
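A schematic reconstruction of the curvature-averaging idea (not the authors' exact AC-FISTA): take a FISTA-type step with step size $1/M_k$, where $M_k$ averages the upper curvatures observed so far, i.e. the smallest $M$ with $f(x) \le f(y) + \langle \nabla f(y), x - y \rangle + \tfrac{M}{2}\|x - y\|^2$ along past iterate pairs. All parameter choices below are illustrative.

```python
import numpy as np

def ac_fista(f, grad_f, prox_h, x0, iters=200, M0=1.0):
    """FISTA-type iteration with an averaged observed-curvature step size (sketch)."""
    x = x_prev = x0.copy()
    t = t_prev = 1.0
    curvatures = [M0]
    for _ in range(iters):
        M = np.mean(curvatures)                       # averaged curvature estimate
        y = x + ((t_prev - 1.0) / t) * (x - x_prev)   # FISTA extrapolation
        x_prev = x
        x = prox_h(y - grad_f(y) / M, 1.0 / M)        # one composite resolvent
        d = x - y                                     # observed upper curvature at (x, y)
        obs = 2.0 * (f(x) - f(y) - grad_f(y) @ d) / max(d @ d, 1e-16)
        curvatures.append(max(obs, 1e-12))
        t_prev, t = t, (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    return x
```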
For some typical and widely used nonconvex half-quadratic regularization models and the Ambrosio-Tortorelli approximation of the Mumford-Shah model, based on Kurdyka-Łojasiewicz analysis and recent nonconvex proximal algorithms, we develop an efficient preconditioned framework for the linear subproblems that appear in the nonlinear alternating minimization procedure. Solving large-scale linear subproblems is an important and challenging step in many alternating minimization algorithms. By incorporating efficient, classical preconditioned iterations into the nonlinear and nonconvex optimization, we prove that only one (or any finite number of) preconditioned iterations are needed for each linear subproblem, without the error control required by the usual inexact solvers. The proposed preconditioned framework provides great flexibility and efficiency in handling the linear subproblems while simultaneously guaranteeing global convergence of the nonlinear alternating minimization method.
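The central mechanism, as we read it, in a toy form: within each alternating-minimization sweep, the linear subproblem $Mu = \mathrm{rhs}$ receives only a fixed number of preconditioned iterations, with no error tolerance to monitor. The Jacobi preconditioner below is our illustrative choice, not necessarily the paper's.

```python
import numpy as np

def preconditioned_sweeps(M, rhs, u, sweeps=1):
    """A fixed number of preconditioned (Jacobi) iterations on M u = rhs."""
    P_inv = 1.0 / np.diag(M)            # Jacobi preconditioner (nonzero diagonal assumed)
    for _ in range(sweeps):             # per the paper, a fixed sweep count suffices
        u = u + P_inv * (rhs - M @ u)   # for global convergence of the outer method
    return u
```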