
On Local Minimizers of Quadratically Constrained Nonconvex Homogeneous Quadratic Optimization with at Most Two Constraints

Added by: Yong Xia
Publication date: 2021
Language: English





We study nonconvex homogeneous quadratically constrained quadratic optimization with one or two constraints, denoted by (QQ1) and (QQ2), respectively. (QQ2) contains (QQ1), the trust region subproblem (TRS) and the ellipsoid-regularized total least squares problem as special cases. It is known that a necessary and sufficient optimality condition holds for the global minimizer of (QQ2). In this paper, we first show that any local minimizer of (QQ1) is globally optimal. Unlike its special case (TRS), which has at most one local non-global minimizer, (QQ2) may have infinitely many local non-global minimizers. At any local non-global minimizer of (QQ2), both the linear independence constraint qualification and the strict complementarity condition hold, and the Hessian of the Lagrangian has exactly one negative eigenvalue. As a main contribution, we prove that the standard second-order sufficient optimality condition remains necessary at any strict local non-global minimizer of (QQ2). Applications and the impossibility of further extension are discussed.
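For orientation, here is a minimal LaTeX sketch of the standard problem shapes, assuming symmetric matrices A, B, B_1, B_2 and unit right-hand sides; the paper's exact constraint data may differ. (TRS), whose objective carries a linear term, fits the homogeneous template via the usual homogenization that appends a scalar variable t constrained by t^2 = 1, which is how it lands inside the two-constraint class (QQ2).

    % Assumed standard forms (illustrative; the paper's data may differ)
    (\mathrm{QQ1})\quad \min_{x \in \mathbb{R}^n} \; x^\top A x
        \quad \text{s.t.} \quad x^\top B x \le 1
    (\mathrm{QQ2})\quad \min_{x \in \mathbb{R}^n} \; x^\top A x
        \quad \text{s.t.} \quad x^\top B_1 x \le 1, \;\; x^\top B_2 x \le 1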

Related research

We prove that a special variety of quadratically constrained quadratic programs, occurring frequently in the design of wave systems obeying causality and passivity (i.e. systems with bounded response), universally exhibits strong duality. As a direct consequence, the problem of continuum (grayscale or effective-medium) device design for any (complex) quadratic wave objective governed by independent quadratic constraints can be solved as a convex program. The result guarantees that performance limits for many common physical objectives can be made nearly tight, and suggests far-reaching implications for problems in optics, acoustics, and quantum mechanics.
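As a hedged illustration of what strong duality buys here (generic complex QCQP shapes, not the paper's specific wave-design data): the Lagrangian dual of a QCQP is always a convex problem and weak duality always holds; the paper's claim is that the gap closes for this family, so the convex dual solves the nonconvex design problem exactly.

    % Generic complex QCQP and its Lagrangian dual (illustrative shapes)
    p^\star = \min_{x \in \mathbb{C}^n} \; x^\dagger A_0 x + 2\,\mathrm{Re}(b_0^\dagger x)
        \quad \text{s.t.} \quad x^\dagger A_i x + 2\,\mathrm{Re}(b_i^\dagger x) \le c_i, \quad i = 1, \dots, m
    d^\star = \max_{\lambda \ge 0} \, \inf_{x} \, L(x, \lambda)
        \qquad \text{(weak duality: } d^\star \le p^\star \text{; strong duality: } d^\star = p^\star \text{)}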
For some typical and widely used nonconvex half-quadratic regularization models and the Ambrosio-Tortorelli approximation of the Mumford-Shah model, based on Kurdyka-Łojasiewicz analysis and recent nonconvex proximal algorithms, we develop an efficient preconditioned framework for the linear subproblems that appear in the nonlinear alternating minimization procedure. Solving large-scale linear subproblems is an important and challenging task for many alternating minimization algorithms. By incorporating efficient, classical preconditioned iterations into the nonlinear and nonconvex optimization, we prove that one, or any finite number of, preconditioned iterations suffices for the linear subproblems, without the error control required by the usual inexact solvers. The proposed preconditioned framework thus offers great flexibility and efficiency for the linear subproblems while simultaneously guaranteeing the global convergence of the nonlinear alternating minimization method.
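A minimal numerical sketch of the idea that a fixed, finite number of preconditioned sweeps can stand in for an exact inner solve; the function names and the Jacobi preconditioner are illustrative assumptions, not the paper's specific framework.

    import numpy as np

    def preconditioned_sweep(A, b, x, M_inv):
        """One preconditioned Richardson sweep for the linear subproblem A x = b.

        The abstract's point: the outer alternating minimization stays globally
        convergent even if the inner solve is just one (or any finite number of)
        such sweeps, with no error control.
        """
        return x + M_inv @ (b - A @ x)

    # Hypothetical SPD linear subproblem with a Jacobi preconditioner M = diag(A).
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 50))
    A = A.T @ A + 50.0 * np.eye(50)
    b = rng.standard_normal(50)
    M_inv = np.diag(1.0 / np.diag(A))
    x = np.zeros(50)
    for _ in range(3):              # a fixed, finite number of inner sweeps
        x = preconditioned_sweep(A, b, x, M_inv)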
A sequential quadratic optimization algorithm is proposed for solving smooth nonlinear equality-constrained optimization problems in which the objective function is defined by an expectation of a stochastic function. The algorithmic structure of the proposed method is based on a step-decomposition strategy known in the literature to be widely effective in practice, wherein each search direction is computed as the sum of a normal step (toward linearized feasibility) and a tangential step (toward objective decrease in the null space of the constraint Jacobian). The proposed method differs from others in the literature in that it both allows the use of stochastic objective gradient estimates and possesses convergence guarantees even when the constraint Jacobians may be rank deficient. The results of numerical experiments demonstrate that the algorithm offers superior performance when compared to popular alternatives.
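A NumPy sketch of the normal/tangential decomposition, with hypothetical names and pseudoinverse realizations of the two subproblems (the paper's actual subproblems and safeguards differ); the pseudoinverse keeps the construction well defined even when the Jacobian is rank deficient.

    import numpy as np

    def decomposed_step(g, J, c):
        """Search direction = normal step + tangential step (illustrative sketch).

        g : (stochastic estimate of the) objective gradient
        J : constraint Jacobian, possibly rank deficient
        c : constraint values; linearized feasibility asks for J d ~= -c
        """
        J_pinv = np.linalg.pinv(J)
        v = -J_pinv @ c                      # normal step toward linearized feasibility
        P = np.eye(J.shape[1]) - J_pinv @ J  # projector onto null(J)
        u = -P @ g                           # tangential step: projected steepest descent
        return v + u

    # Hypothetical data: 2 constraints, 5 variables, rank-1 (deficient) Jacobian.
    J = np.array([[1.0, 0.0, 0.0, 0.0, 0.0],
                  [2.0, 0.0, 0.0, 0.0, 0.0]])
    d = decomposed_step(np.ones(5), J, np.array([0.5, 1.0]))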
Nonlinearly constrained nonconvex and nonsmooth optimization models play an increasingly important role in machine learning, statistics and data analytics. In this paper, based on the augmented Lagrangian function, we introduce a flexible first-order primal-dual method, called the nonconvex auxiliary problem principle of augmented Lagrangian (NAPP-AL), for solving a class of nonlinearly constrained nonconvex and nonsmooth optimization problems. We demonstrate that NAPP-AL converges to a stationary solution at the rate of o(1/sqrt(k)), where k is the number of iterations. Moreover, under an additional error bound condition (called VP-EB in the paper), we show that the convergence rate is in fact linear. Finally, we show that the famous Kurdyka-Łojasiewicz property together with metric subregularity implies the aforementioned VP-EB condition.
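For context, a bare-bones method-of-multipliers skeleton in NumPy; this is only the classical augmented Lagrangian template that NAPP-AL builds on, with plain gradient descent standing in for NAPP-AL's auxiliary-problem primal step (names and parameters are hypothetical).

    import numpy as np

    def method_of_multipliers(f_grad, h, h_jac, x, lam,
                              rho=10.0, outer=30, inner=100, step=2e-2):
        """min f(x) s.t. h(x) = 0 via the classical augmented Lagrangian loop."""
        for _ in range(outer):
            for _ in range(inner):
                # Gradient of L_rho(x, lam) = f(x) + lam^T h(x) + (rho/2) ||h(x)||^2
                x = x - step * (f_grad(x) + h_jac(x).T @ (lam + rho * h(x)))
            lam = lam + rho * h(x)   # dual ascent on the constraint residual
        return x, lam

    # Toy usage (hypothetical): min ||x||^2  s.t.  x[0] + x[1] = 1; solution (0.5, 0.5).
    x, lam = method_of_multipliers(
        f_grad=lambda x: 2.0 * x,
        h=lambda x: np.array([x[0] + x[1] - 1.0]),
        h_jac=lambda x: np.array([[1.0, 1.0]]),
        x=np.zeros(2), lam=np.zeros(1))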
The generalized trust-region subproblem (GT) is a nonconvex quadratic optimization problem with a single quadratic constraint. It reduces to the classical trust-region subproblem (T) if the constraint set is a Euclidean ball. (GT) is polynomially solvable based on its inherent hidden convexity. In this paper, we study local minimizers of (GT). Unlike (T), which has at most one local non-global minimizer, two-dimensional (GT) can have at most two local non-global minimizers, and an example shows that both can be attained. The main contribution of this paper is to prove that, at any local non-global minimizer of (GT), not only does the strict complementarity condition hold, but the standard second-order sufficient optimality condition also remains necessary. As a corollary, finding all local non-global minimizers of (GT), or proving their nonexistence, can be done in polynomial time. Finally, for (GT) in the complex domain, we prove that there is no local non-global minimizer, which demonstrates that a real-valued optimization problem may be harder to solve than its complex counterpart.
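The second-order condition in question can be checked numerically at a candidate point. Here is a hedged NumPy sketch (a generic KKT-style check, not the paper's proof technique) that tests positive definiteness of the Lagrangian Hessian restricted to the tangent space of the single active constraint.

    import numpy as np

    def sosc_holds(H_L, grad_c, tol=1e-8):
        """True if d^T H_L d > 0 for all d != 0 with grad_c^T d = 0.

        H_L    : Hessian of the Lagrangian at the candidate point
        grad_c : gradient of the active quadratic constraint there
        """
        g = grad_c / np.linalg.norm(grad_c)
        # Orthonormal basis of the tangent space {d : g^T d = 0}: the first
        # n-1 left singular vectors of the projector I - g g^T.
        Z = np.linalg.svd(np.eye(g.size) - np.outer(g, g))[0][:, :-1]
        return bool(np.all(np.linalg.eigvalsh(Z.T @ H_L @ Z) > tol))

    # Hypothetical 3-variable data: the full Hessian is indefinite, yet
    # positive definite on the tangent space, so the check returns True.
    print(sosc_holds(np.diag([2.0, 1.0, -1.0]), np.array([0.0, 0.0, 1.0])))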
