
A Preconditioned Difference of Convex Algorithm for Truncated Quadratic Regularization with Application to Imaging

Added by Dr. Hongpeng Sun
Publication date: 2020
Language: English





We consider a minimization problem with truncated quadratic regularization of the image gradient, which is nonsmooth and nonconvex. We incorporate classical preconditioned iterations for linear equations into a nonlinear difference-of-convex-functions algorithm with extrapolation. In particular, our preconditioned framework can deal efficiently with the large linear systems whose solution is usually computationally expensive. Global convergence is guaranteed, and a local linear convergence rate is derived from an analysis of the Kurdyka-Łojasiewicz exponent of the minimization functional. The proposed algorithm with preconditioners turns out to be very efficient for image restoration and is also appealing for image segmentation.
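As a rough illustration of the approach, the following minimal 1-D sketch runs a difference-of-convex iteration with extrapolation on the truncated quadratic model, using the DC split min(t^2, lam) = t^2 - max(t^2 - lam, 0) and a few Gauss-Seidel sweeps in place of an exact linear solve. The parameter values, extrapolation rule, and choice of sweeps are illustrative assumptions, not the paper's specific preconditioners.

```python
import numpy as np

# Minimal sketch (an assumption-laden illustration, not the paper's exact
# method) of DCA with extrapolation for
#   min_u 0.5*||u - f||^2 + mu * sum_i min((Du)_i^2, lam),
# with the convex subproblem solved only approximately by Gauss-Seidel sweeps.

def dca_truncated_quadratic(f, mu=1.0, lam=0.1, iters=50, sweeps=3):
    n = f.size
    D = np.diff(np.eye(n), axis=0)        # forward-difference (gradient) operator
    A = np.eye(n) + 2 * mu * D.T @ D      # matrix of the convex subproblem
    u_prev = f.copy()
    u = f.copy()
    for k in range(iters):
        beta = k / (k + 3.0)              # extrapolation weight (assumed rule)
        u_tilde = u + beta * (u - u_prev)
        # Gradient of the concave part h(u) = mu * sum_i max((Du)_i^2 - lam, 0)
        g = D @ u_tilde
        w = 2 * mu * (D.T @ (g * (g ** 2 > lam)))
        # Approximately solve (I + 2*mu*D.T*D) u = f + w with a fixed number
        # of Gauss-Seidel sweeps -- the stand-in for preconditioned iterations.
        b = f + w
        u_prev = u
        u = u.copy()
        for _ in range(sweeps):
            for i in range(n):
                u[i] = (b[i] - A[i, :i] @ u[:i] - A[i, i + 1:] @ u[i + 1:]) / A[i, i]
    return u

noisy = np.sin(np.linspace(0, 3, 200)) + 0.1 * np.random.randn(200)
restored = dca_truncated_quadratic(noisy)
```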



Related research

A constraint-reduced Mehrotra predictor-corrector algorithm for convex quadratic programming is proposed. (At each iteration, such algorithms use only a subset of the inequality constraints in constructing the search direction, resulting in CPU savings.) The proposed algorithm makes use of a regularization scheme to cater to cases where the reduced constraint matrix is rank-deficient. Global and local convergence properties are established under arbitrary working-set selection rules, subject to the satisfaction of a general condition. A modified active-set identification scheme that fulfills this condition is introduced. Numerical tests show great promise for the proposed algorithm, in particular for its active-set identification scheme. While the focus of the present paper is on dense systems, the application of the main ideas to large sparse systems is briefly discussed.
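To make the constraint-reduction idea concrete, here is a hypothetical sketch of a working set that keeps only the q most nearly active inequality constraints, so each search direction is built from a small subsystem. The smallest-slack selection rule is one simple choice satisfying typical working-set conditions; the paper's rule may differ.

```python
import numpy as np

# Constraint reduction for a_i^T x <= b_i: keep the q constraints with the
# smallest slacks at the current iterate (assumed selection rule).

def working_set(A, b, x, q):
    slack = b - A @ x                 # nonnegative at a feasible x
    return np.argsort(slack)[:q]     # indices of the q most nearly active rows

A = np.random.randn(1000, 10)        # many constraints, few variables
b = np.abs(np.random.randn(1000)) + 0.1   # x = 0 is feasible by construction
idx = working_set(A, b, np.zeros(10), q=50)
A_red, b_red = A[idx], b[idx]        # reduced data for the search direction
```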
A framework is proposed for solving general convex quadratic programs (CQPs) from an infeasible starting point by invoking an existing feasible-start algorithm tailored for inequality-constrained CQPs. The central tool is an exact penalty function scheme equipped with a penalty-parameter updating rule. The feasible-start algorithm merely has to satisfy certain general requirements, as does the updating rule. Under mild assumptions, the framework is proved to converge on CQPs with both inequality and equality constraints and, at negligible additional cost per iteration, to produce an infeasibility certificate, together with a feasible point for an (approximately) $\ell_1$-least relaxed feasible problem, when the given problem has no feasible solution. The framework is applied to a feasible-start constraint-reduced interior-point algorithm previously shown to be highly performant on problems with many more constraints than variables (imbalanced problems). A numerical comparison with popular codes (SDPT3, SeDuMi, MOSEK) is reported on both randomly generated problems and support-vector-machine classifier-training problems. The results show that the proposed framework typically outperforms these codes on imbalanced problems.
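A minimal sketch of the exact-penalty outer loop, assuming a simple multiplicative updating rule and a generic infeasibility measure (both stand-ins for the paper's specific scheme):

```python
# The feasible-start solver is run on the l1-relaxed problem and the penalty
# parameter rho grows until the returned point is (near) feasible.  Growth
# factor, tolerance, and round limit are assumed values for illustration.

def penalty_loop(solve_relaxed, infeasibility, rho=1.0, tol=1e-8,
                 grow=10.0, max_rounds=20):
    for _ in range(max_rounds):
        x = solve_relaxed(rho)            # feasible-start solve of the relaxed CQP
        if infeasibility(x) <= tol:
            return x, rho                 # feasible for the original problem
        rho *= grow                       # tighten the penalty and retry
    return x, rho                         # rho capped: evidence of infeasibility

# Toy instance: min x^2 subject to x = 1, relaxed to min x^2 + rho*|x - 1|.
solve = lambda rho: min(rho / 2.0, 1.0)   # closed-form minimizer of the relaxation
x_star, rho_star = penalty_loop(solve, lambda x: abs(x - 1.0))
```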
We propose an extended primal-dual algorithm framework for solving a general nonconvex optimization model. The work is motivated by image reconstruction problems in a class of nonlinear imaging where the forward operator can be formulated as a nonlinear convex function of the reconstructed image. Within the proposed framework we put forward six specific iterative schemes, give a detailed mathematical explanation of each, and establish their relationship to existing algorithms. Moreover, under suitable assumptions, we analyze the convergence of the schemes for the general model when the optimal dual variable associated with the nonlinear operator is non-vanishing. As a representative application, image reconstruction for spectral computed tomography is used to demonstrate the effectiveness of the framework. Exploiting special properties of this concrete problem, we further prove convergence of the customized schemes when the optimal dual variable associated with the nonlinear operator is vanishing. Finally, numerical experiments show that the proposed algorithm performs well on image reconstruction for various data with non-standard scanning configurations.
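As a hedged illustration of this kind of scheme (not necessarily one of the paper's six), the sketch below runs a PDHG-style primal-dual iteration in which a toy componentwise-convex operator K(x) = (Ax)^2 is linearized through its Jacobian at each step; the step sizes, data terms, and operator are assumptions.

```python
import numpy as np

# Primal-dual sketch for min_x F(K(x)) + G(x) with F(z) = 0.5*||z - y||^2
# and G(x) = 0.5*alpha*||x||^2; the nonlinear K is handled through its
# Jacobian at the current iterate (illustrative assumption).

def primal_dual_nonlinear(A, y, alpha=0.1, sigma=0.1, tau=0.1, iters=500):
    m, n = A.shape
    K = lambda x: (A @ x) ** 2                 # toy convex nonlinear forward operator
    J = lambda x: 2 * (A @ x)[:, None] * A     # Jacobian of K at x
    x = np.full(n, 0.5)                        # avoid the degenerate point x = 0
    x_bar = x.copy()
    p = np.zeros(m)
    for _ in range(iters):
        # dual step: prox of F* for F(z) = 0.5*||z - y||^2
        p = (p + sigma * (K(x_bar) - y)) / (1 + sigma)
        # primal step with the linearized operator; prox of G is a rescaling
        x_new = (x - tau * J(x).T @ p) / (1 + tau * alpha)
        x_bar = 2 * x_new - x                  # extrapolation
        x = x_new
    return x

A = np.abs(np.random.randn(30, 10)) / 10
y = (A @ np.random.rand(10)) ** 2              # synthetic measurements
x_rec = primal_dual_nonlinear(A, y)
```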
For some typical and widely used nonconvex half-quadratic regularization models and for the Ambrosio-Tortorelli approximation of the Mumford-Shah model, building on Kurdyka-Łojasiewicz analysis and recent nonconvex proximal algorithms, we develop an efficient preconditioned framework for the linear subproblems that appear in the nonlinear alternating minimization procedure. Solving large-scale linear subproblems is important and challenging for many alternating minimization algorithms. By incorporating efficient, classical preconditioned iterations into the nonlinear and nonconvex optimization, we prove that one, or any finite number of, preconditioned iterations suffice for the linear subproblems, without the error control required by the usual inexact solvers. The proposed preconditioned framework thus provides great flexibility and efficiency in handling the linear subproblems while simultaneously guaranteeing global convergence of the nonlinear alternating minimization method.
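A rough sketch of the "finitely many preconditioned iterations" idea, using symmetric Gauss-Seidel as a representative classical preconditioner (the models above may call for other choices):

```python
import numpy as np

# The inner linear subproblem A u = b of each outer alternating step gets
# only a fixed, small number of classical sweeps, with no inner error control.

def sgs_sweeps(A, b, u, n_sweeps=1):
    L = np.tril(A)                                 # lower triangle incl. diagonal
    U = np.triu(A)                                 # upper triangle incl. diagonal
    for _ in range(n_sweeps):
        u = np.linalg.solve(L, b - (A - L) @ u)    # forward sweep
        u = np.linalg.solve(U, b - (A - U) @ u)    # backward sweep
    return u

n = 100
A = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD, Laplacian-like
b = np.random.randn(n)
u = sgs_sweeps(A, b, np.zeros(n), n_sweeps=2)          # one outer step's inner solve
```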
This work presents a new algorithm for empirical risk minimization. The algorithm bridges the gap between first- and second-order methods by computing a search direction that uses a second-order-type update in one subspace, coupled with a scaled steepest descent step in the orthogonal complement. To this end, partial curvature information is incorporated to help with ill-conditioning, while simultaneously allowing the algorithm to scale to the large problem dimensions often encountered in machine learning applications. Theoretical results are presented to confirm that the algorithm converges to a stationary point in both the strongly convex and nonconvex cases. A stochastic variant of the algorithm is also presented, along with corresponding theoretical guarantees. Numerical results confirm the strengths of the new approach on standard machine learning problems.
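The two-subspace direction can be sketched as follows, where the curvature subspace is taken, as an illustrative assumption, to be the dominant eigenvectors of the Hessian and the orthogonal complement gets a scaled steepest-descent step; at scale one would use cheap curvature estimates rather than an exact eigendecomposition.

```python
import numpy as np

# Hypothetical split search direction: Newton-type step in a low-dimensional
# subspace S carrying curvature information, scaled steepest descent in the
# orthogonal complement.

def split_direction(H, g, dim=5, descent_scale=0.1):
    _, V = np.linalg.eigh(H)
    S = V[:, -dim:]                                        # dominant curvature directions
    d_newton = -S @ np.linalg.solve(S.T @ H @ S, S.T @ g)  # second-order step in S
    g_perp = g - S @ (S.T @ g)                             # gradient component outside S
    return d_newton - descent_scale * g_perp               # combined direction

H = np.diag(np.linspace(0.01, 10.0, 20))   # ill-conditioned toy Hessian
g = np.random.randn(20)
d = split_direction(H, g)                  # search direction: x <- x + d
```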