
Implementation of Interior-point Methods for LP based on Krylov Subspace Iterative Solvers with Inner-iteration Preconditioning

Posted by Keiichi Morikuni
Publication date: 2016
Language: English





We apply novel inner-iteration preconditioned Krylov subspace methods to the interior-point algorithm for linear programming (LP). Inner-iteration preconditioners recently proposed by Morikuni and Hayami enable us to overcome the severe ill-conditioning of the linear equations solved in the final phase of interior-point iterations. The Krylov subspace methods are not affected by rank deficiency, so no preprocessing is necessary even if the rows of the constraint matrix are not linearly independent. Building on these methods, we propose a new interior-point recurrence that omits one matrix-vector product at each step. Extensive numerical experiments are conducted on 138 LP problems from the Netlib, QAPLIB, Mittelmann, and Atomizer Basis Pursuit collections; the largest problem has 434,580 unknowns. Our implementation turns out to be more robust than the standard public-domain solvers SeDuMi (Self-Dual Minimization), SDPT3 (Semidefinite Programming Toh-Todd-Tutuncu), and the LSMR iterative solver in PDCO (Primal-Dual Barrier Method for Convex Objectives), without increasing CPU time. The proposed interior-point method based on iterative solvers succeeds in solving a large number of LP instances from benchmark libraries under the standard stopping criteria. The work also presents an extensive benchmark of several renowned solvers, both direct and iterative.
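To make the linear algebra concrete: a primal-dual interior-point step typically solves normal equations $A \Theta A^\top \Delta y = r$ with $\Theta = \mathrm{diag}(x_i/s_i)$, which become severely ill-conditioned near optimality. The Python sketch below is not the authors' implementation: a simple Jacobi preconditioner stands in for the NE-SOR-type inner-iteration preconditioners of the paper, and, unlike the paper's methods, plain CG assumes $A$ has full row rank. All names are illustrative.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

def normal_equations_step(A, x, s, r):
    """Solve (A Theta A^T) dy = r with preconditioned CG, Theta = diag(x/s)."""
    theta = x / s                                  # interior-point scaling
    m = A.shape[0]
    AThAT = LinearOperator((m, m), matvec=lambda v: A @ (theta * (A.T @ v)))
    # Jacobi preconditioner: inverse diagonal of A Theta A^T (a crude
    # stand-in for the inner-iteration preconditioners used in the paper)
    d = np.asarray(A.multiply(A) @ theta).ravel() + 1e-12
    M = LinearOperator((m, m), matvec=lambda v: v / d)
    dy, info = cg(AThAT, r, M=M)
    return dy, info                                # info == 0: CG converged

# toy usage with a random sparse constraint matrix
rng = np.random.default_rng(0)
A = sp.random(5, 12, density=0.5, random_state=0, format="csr")
x, s = rng.uniform(0.5, 2.0, 12), rng.uniform(0.5, 2.0, 12)
dy, info = normal_equations_step(A, x, s, rng.standard_normal(5))
print(info)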


Read also

Jayanta Mandi, Tias Guns (2020)
Solving optimization problems is the key to decision making in many real-life analytics applications. However, the coefficients of the optimization problems are often uncertain and dependent on external factors, such as future demand or energy or stock prices. Machine learning (ML) models, especially neural networks, are increasingly being used to estimate these coefficients in a data-driven way. Hence, end-to-end predict-and-optimize approaches, which consider how effective the predicted values are for solving the optimization problem, have received increasing attention. For integer linear programming problems, a popular approach to overcoming their non-differentiability is to add a quadratic penalty term to the continuous relaxation, so that results from differentiating over quadratic programs can be used. Instead, we investigate the use of the more principled logarithmic barrier term, as widely used in interior-point solvers for linear programming. Specifically, instead of differentiating the KKT conditions, we consider the homogeneous self-dual formulation of the LP and show the relation between the interior-point step direction and the corresponding gradients needed for learning. Finally, our empirical experiments demonstrate that our approach performs as well as, if not better than, the state-of-the-art QPTL (quadratic programming task loss) formulation of Wilder et al. and the SPO approach of Elmachtoub and Grigas.
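As background (standard interior-point material, not this paper's specific homogeneous self-dual derivation): the logarithmic barrier replaces the LP $\min\{c^\top x : Ax = b,\ x \ge 0\}$ by

$$\min_x \; c^\top x - \mu \sum_i \log x_i \quad \text{subject to } Ax = b,$$

whose optimality conditions are the perturbed KKT system $Ax = b$, $A^\top y + s = c$, $x_i s_i = \mu$ for all $i$, with $x, s > 0$. Because the solution $x^*(\mu)$ depends smoothly on $c$ for $\mu > 0$, gradients of a task loss with respect to the predicted coefficients can be propagated through this system, which is the differentiability that the quadratic penalty is otherwise added to obtain.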
Kirk M. Soodhalter (2021)
Subspace recycling iterative methods and other subspace augmentation schemes are a successful extension to Krylov subspace methods, in which a Krylov subspace is augmented with a fixed subspace spanned by vectors deemed to be helpful in accelerating convergence or conveying knowledge of the solution. Recently, a survey was published in which a framework describing the vast majority of such methods was proposed [Soodhalter et al., GAMM-Mitt. 2020]. In many of these methods, the Krylov subspace is one generated by the system matrix composed with a projector that depends on the augmentation space. However, it is not a requirement that a projected Krylov subspace be used. There are augmentation methods built on Krylov subspaces generated by the original system matrix, and these methods also fit into the general framework. In this note, we observe that one gains implementation benefits by considering such augmentation methods with unprojected Krylov subspaces in the general framework. We demonstrate this by applying the idea to the R$^3$GMRES method proposed in [Dong et al., ETNA 2014] to obtain a simplified implementation and to connect that algorithm to early augmentation schemes based on flexible preconditioning [Saad, SIMAX 1997].
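As a rough illustration of augmenting an unprojected Krylov solve with a fixed subspace (a sketch only: R$^3$GMRES minimizes the residual over the sum of the Krylov and augmentation spaces jointly, whereas this two-stage version corrects a plain GMRES iterate over range(U) afterwards; all data below are synthetic):

import numpy as np
from scipy.sparse.linalg import gmres

rng = np.random.default_rng(1)
n, k = 200, 4
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
U = rng.standard_normal((n, k))     # fixed augmentation basis (in practice,
                                    # recycled or problem-specific vectors)

x_k, _ = gmres(A, b, restart=20, maxiter=1)    # unprojected Krylov phase
r = b - A @ x_k                                # remaining residual
z, *_ = np.linalg.lstsq(A @ U, r, rcond=None)  # minimize ||r - A U z|| over z
x = x_k + U @ z                                # augmented approximation
print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))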
We provide a condition-based analysis of two interior-point methods for unconstrained geometric programs, a class of convex programs that arise naturally in applications including matrix scaling, matrix balancing, and entropy maximization. Our condition numbers are natural geometric quantities associated with the Newton polytope of the geometric program, and lead to diameter bounds on approximate minimizers. We also provide effective bounds on the condition numbers, both in general and under combinatorial assumptions on the Newton polytope. In this way, we generalize the iteration complexity of recent interior-point methods for matrix scaling and matrix balancing. Recently, there has been much work on algorithms for certain optimization problems on Lie groups, known as capacity and scaling problems. For commutative groups, these problems reduce to unconstrained geometric programs, which serves as a particular source of motivation for our work.
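In the convention of the recent scaling literature (a sketch; the paper's precise setup may differ), an unconstrained geometric program minimizes

$$f(x) = \log \sum_{j=1}^{m} c_j\, e^{\langle \alpha_j, x \rangle}, \qquad c_j > 0,\; \alpha_j \in \mathbb{R}^n,$$

over $x \in \mathbb{R}^n$, and its Newton polytope is $\mathrm{conv}\{\alpha_1, \dots, \alpha_m\}$; matrix scaling and balancing arise by choosing the exponents $\alpha_j$ according to the support pattern of the matrix.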
Recent advances in open-source interior-point optimization methods and power-system-related software have provided researchers and educators with a platform for simulating and optimizing power networks with unprecedented convenience. Within the Matpower software platform, several different interior-point optimization methods are provided, and four different optimal power flow (OPF) formulations are now available: Polar-Power, Polar-Current, Cartesian-Power, and Cartesian-Current. We investigate the robustness and reliability of interior-point methods for these OPF formulations when minimizing generation cost from different initial guesses, over a wide range of networks in the Matpower library ranging from 1951 to 193000 buses. Performance profiles are presented for iteration counts, overall time, and memory consumption, revealing the most reliable optimization method for each metric.
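The performance profiles referred to are presumably Dolan-More profiles; as a brief sketch under that assumption (function and data below are illustrative), $\rho_s(\tau)$ is the fraction of problems on which solver $s$'s cost is within a factor $\tau$ of the best solver:

import numpy as np

def performance_profile(T, taus):
    """T[p, s]: cost of solver s on problem p (np.inf for a failure)."""
    ratios = T / T.min(axis=1, keepdims=True)      # r_{p,s} = cost / best cost
    # rho_s(tau): fraction of problems solved within factor tau of the best
    return np.array([[np.mean(ratios[:, s] <= tau) for s in range(T.shape[1])]
                     for tau in taus])

# toy data: 3 solvers on 5 problems; np.inf marks a solver failure
T = np.array([[1.0, 1.2, 3.0],
              [2.0, 1.9, np.inf],
              [0.5, 0.7, 0.6],
              [4.0, 3.5, 3.6],
              [1.1, np.inf, 1.0]])
print(performance_profile(T, taus=[1.0, 2.0, 4.0]))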
Yair Carmon, John C. Duchi (2018)
We provide convergence rates for Krylov subspace solutions to the trust-region and cubic-regularized (nonconvex) quadratic problems. Such solutions may be efficiently computed by the Lanczos method and have long been used in practice. We prove error bounds of the form $1/t^2$ and $e^{-4t/\sqrt{\kappa}}$, where $\kappa$ is a condition number for the problem and $t$ is the Krylov subspace order (number of Lanczos iterations). We also provide lower bounds showing that our analysis is sharp.
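As an illustration of the object being analyzed (a hedged sketch, not the authors' code): the order-$t$ Krylov solution minimizes the quadratic over $\mathcal{K}_t(A, b)$ intersected with the ball $\|x\| \le R$. Projecting onto an orthonormal Krylov basis reduces this to a small problem solved via the multiplier equation $(T + \lambda I)y = -g$; the bisection below stands in for the root-finding that Lanczos-based solvers perform on the tridiagonal matrix, and the so-called hard case is not handled.

import numpy as np

def krylov_trust_region(A, b, R, t):
    """min 0.5 x'Ax + b'x over ||x|| <= R, restricted to K_t(A, b)."""
    # Orthonormal Krylov basis (equivalent to Lanczos in exact arithmetic;
    # full reorthogonalization kept for clarity)
    Q = [b / np.linalg.norm(b)]
    for _ in range(t - 1):
        w = A @ Q[-1]
        for q in Q:
            w = w - (q @ w) * q
        nw = np.linalg.norm(w)
        if nw < 1e-12:
            break
        Q.append(w / nw)
    Q = np.stack(Q, axis=1)
    T, g = Q.T @ (A @ Q), Q.T @ b              # projected small problem
    lam = max(0.0, -np.linalg.eigvalsh(T)[0]) + 1e-10
    y = np.linalg.solve(T + lam * np.eye(len(g)), -g)
    if np.linalg.norm(y) <= R:                 # constraint inactive (easy case;
        return Q @ y                           # the hard case is not handled)
    lo, hi = lam, lam + 1.0                    # bracket lambda: ||y(lam)|| = R
    while np.linalg.norm(np.linalg.solve(T + hi * np.eye(len(g)), -g)) > R:
        hi *= 2.0
    for _ in range(80):                        # bisect on the secular equation
        mid = 0.5 * (lo + hi)
        ynorm = np.linalg.norm(np.linalg.solve(T + mid * np.eye(len(g)), -g))
        lo, hi = (mid, hi) if ynorm > R else (lo, mid)
    return Q @ np.linalg.solve(T + hi * np.eye(len(g)), -g)

# toy usage on an indefinite symmetric matrix
rng = np.random.default_rng(2)
A = rng.standard_normal((50, 50)); A = 0.5 * (A + A.T)
x = krylov_trust_region(A, rng.standard_normal(50), R=1.0, t=15)
print(np.linalg.norm(x))   # ~1.0 when the constraint is active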