
A Hybrid Quasi-Newton Projected-Gradient Method with Application to Lasso and Basis-Pursuit Denoise

Posted by Ewout van den Berg
Published 2016
Paper language: English





We propose a new algorithm for the optimization of convex functions over a polyhedral set in R^n. The algorithm extends the spectral projected-gradient method with limited-memory BFGS iterates restricted to the current face whenever possible. We prove convergence of the algorithm under suitable conditions and apply it to solve the Lasso problem and, consequently, the basis-pursuit denoise problem through the root-finding framework proposed by van den Berg and Friedlander [SIAM Journal on Scientific Computing, 31(2), 2008]. The algorithm is especially well suited to simple domains and could also be used to solve bound-constrained problems as well as problems restricted to the simplex.
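To make the constrained Lasso setting concrete, the following is a minimal sketch of plain projected gradient descent on min 0.5||Ax - b||^2 subject to ||x||_1 <= tau, using the sort-based l1-ball projection of Duchi et al. (2008). This is only the baseline method that the paper's hybrid algorithm builds on; it does not include the spectral step lengths or the face-restricted L-BFGS iterates, and the function names are illustrative.

```python
import numpy as np

def project_l1_ball(v, tau):
    """Euclidean projection of v onto {x : ||x||_1 <= tau}
    via the sort-based algorithm of Duchi et al. (2008)."""
    if np.abs(v).sum() <= tau:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]               # magnitudes, descending
    css = np.cumsum(u)
    # largest index k with u_k > (cumsum_k - tau) / k  (1-based k)
    k = np.nonzero(u * np.arange(1, len(v) + 1) > css - tau)[0][-1]
    theta = (css[k] - tau) / (k + 1.0)         # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def lasso_projected_gradient(A, b, tau, iters=500):
    """Minimize 0.5*||Ax - b||^2 s.t. ||x||_1 <= tau by projected
    gradient with the fixed step 1/||A||_2^2."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = project_l1_ball(x - step * grad, tau)
    return x
```

Each iterate stays feasible by construction, which is the property the spectral projected-gradient framework exploits when it switches to BFGS steps on the active face.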




Read also

For the minimization of a nonlinear cost functional $j$ under convex constraints, the relaxed projected gradient process $\varphi_{k+1} = \varphi_{k} + \alpha_k(P_H(\varphi_{k}-\lambda_k \nabla_H j(\varphi_{k}))-\varphi_{k})$ is a well-known method. The analysis is classically performed in a Hilbert space $H$. We generalize this method to functionals $j$ which are differentiable in a Banach space, so that it is possible to perform, e.g., an $L^2$ gradient method if $j$ is only differentiable in $L^\infty$. We show global convergence using Armijo backtracking in $\alpha_k$ and allow the inner product and the scaling $\lambda_k$ to change in every iteration. As an application we present a structural topology optimization problem based on a phase field model, where the reduced cost functional $j$ is differentiable in $H^1\cap L^\infty$. The presented numerical results using the $H^1$ inner product and a pointwise chosen metric including second-order information show the expected mesh independence in the iteration numbers; the latter yields an additional, drastic decrease in iteration numbers as well as in computation time. Moreover, we present numerical results using a BFGS update of the $H^1$ inner product for further optimization problems based on phase field models.
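The iteration above can be sketched in finite dimensions. The following hedged example implements the relaxed projected gradient step $\varphi_{k+1} = \varphi_k + \alpha_k(P_H(\varphi_k - \lambda_k \nabla j(\varphi_k)) - \varphi_k)$ with Armijo backtracking on $\alpha_k$, using a box constraint and the Euclidean inner product as a simple stand-in for the Hilbert/Banach-space setting of the abstract; all function names are illustrative.

```python
import numpy as np

def projected_gradient_armijo(j, grad, P, x0, lam=1.0, c=1e-4, iters=100):
    """Relaxed projected gradient with Armijo backtracking:
    d_k = P(x_k - lam*grad(x_k)) - x_k, then shrink alpha until
    j(x_k + alpha*d_k) <= j(x_k) + c*alpha*<grad(x_k), d_k>."""
    x = x0.copy()
    for _ in range(iters):
        d = P(x - lam * grad(x)) - x          # projected-gradient direction
        alpha = 1.0
        while j(x + alpha * d) > j(x) + c * alpha * (grad(x) @ d) and alpha > 1e-12:
            alpha *= 0.5                      # Armijo backtracking
        x = x + alpha * d
    return x
```

For a quadratic $j(x) = \tfrac12\|x - t\|^2$ with the box $[0,1]^n$, the method recovers the clipped target, matching the known closed-form projection of the unconstrained minimizer.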
Conic optimization is the minimization of a differentiable convex objective function subject to conic constraints. We propose a novel primal-dual first-order method for conic optimization, named proportional-integral projected gradient method (PIPG). PIPG ensures that both the primal-dual gap and the constraint violation converge to zero at the rate of $O(1/k)$, where $k$ is the number of iterations. If the objective function is strongly convex, PIPG improves the convergence rate of the primal-dual gap to $O(1/k^2)$. Further, unlike any existing first-order methods, PIPG also improves the convergence rate of the constraint violation to $O(1/k^3)$. We demonstrate the application of PIPG in constrained optimal control problems.
Quasi-Newton techniques approximate the Newton step by estimating the Hessian using the so-called secant equations. Some of these methods compute the Hessian using several secant equations but produce non-symmetric updates. Other quasi-Newton schemes, such as BFGS, enforce symmetry but cannot satisfy more than one secant equation. We propose a new type of quasi-Newton symmetric update using several secant equations in a least-squares sense. Our approach generalizes and unifies the design of quasi-Newton updates and satisfies provable robustness guarantees.
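The contrast drawn above (symmetry vs. multiple secant equations) can be checked numerically for the classical case: the standard BFGS Hessian update is symmetric and satisfies exactly one secant equation $B_{k+1} s_k = y_k$. The sketch below shows only that baseline property, not the paper's new least-squares multi-secant update.

```python
import numpy as np

def bfgs_update(B, s, y):
    """Standard BFGS update of a Hessian approximation B:
    B_new = B - (B s s^T B)/(s^T B s) + (y y^T)/(y^T s).
    The result is symmetric and satisfies the secant equation B_new @ s = y,
    provided the curvature condition y^T s > 0 holds."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)
```

Given a second pair $(s', y')$, the updated matrix generally fails $B s' = y'$, which is exactly the limitation that multi-secant least-squares updates aim to remove.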
Yue Yu, Ufuk Topcu (2021)
A constrained optimization problem is primal infeasible if its constraints cannot be satisfied, and dual infeasible if the constraints of its dual problem cannot be satisfied. We propose a novel iterative method, named proportional-integral projected gradient method (PIPG), for detecting primal and dual infeasibility in convex optimization with quadratic objective function and conic constraints. The iterates of PIPG either asymptotically provide a proof of primal or dual infeasibility, or asymptotically satisfy a set of primal-dual optimality conditions. Unlike existing methods, PIPG does not compute matrix inverses, which makes it better suited for large-scale and real-time applications. We demonstrate the application of PIPG in quasiconvex and mixed-integer optimization using examples in constrained optimal control.
The paper starts with a concise description of the recently developed semismooth* Newton method for the solution of general inclusions. This method is then applied to a class of variational inequalities of the second kind. As a result, one obtains an implementable algorithm exhibiting a local superlinear convergence. Thereafter we suggest several globally convergent hybrid algorithms in which one combines the semismooth* Newton method with selected splitting algorithms for the solution of monotone variational inequalities. Their efficiency is documented by extensive numerical experiments.