
A new secant method for unconstrained optimization

Posted by Stephen Vavasis
Publication date: 2008
Language: English





We present a gradient-based algorithm for unconstrained minimization derived from iterated linear change of basis. The new method is equivalent to linear conjugate gradient in the case of a quadratic objective function. In the case of exact line search it is a secant method. In practice, it performs comparably to BFGS and DFP and is sometimes more robust.
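
The abstract does not state the update itself, but it pins down a special case: on a quadratic objective the new method coincides with linear conjugate gradient. As a reference point, here is a minimal sketch of linear CG for f(x) = 0.5 xᵀAx - bᵀx (function and variable names are illustrative; this is the quadratic special case, not the paper's general algorithm):

```python
import numpy as np

def cg_minimize(A, b, x0, tol=1e-10, max_iter=None):
    """Linear conjugate gradient for f(x) = 0.5 x^T A x - b^T x, A SPD.

    This is the special case the new secant method reduces to on
    quadratic objectives; the paper's general update is not shown here.
    """
    x = x0.copy()
    r = b - A @ x          # negative gradient of f at x
    p = r.copy()           # first direction is steepest descent
    max_iter = max_iter or len(b)
    for _ in range(max_iter):
        rr = r @ r
        if np.sqrt(rr) < tol:
            break
        Ap = A @ p
        alpha = rr / (p @ Ap)      # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        beta = (r @ r) / rr        # conjugacy coefficient
        p = r + beta * p
    return x

# Toy usage: minimize a 3-variable strongly convex quadratic.
A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 0.0], [0.0, 0.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x_star = cg_minimize(A, b, np.zeros(3))
print(x_star, np.linalg.norm(A @ x_star - b))
```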




Read also

This paper describes an extension of the BFGS and L-BFGS methods for the minimization of a nonlinear function subject to errors. This work is motivated by applications that contain computational noise, employ low-precision arithmetic, or are subject to statistical noise. The classical BFGS and L-BFGS methods can fail in such circumstances because the updating procedure can be corrupted and the line search can behave erratically. The proposed method addresses these difficulties and ensures that the BFGS update is stable by employing a lengthening procedure that spaces out the points at which gradient differences are collected. A new line search, designed to tolerate errors, guarantees that the Armijo-Wolfe conditions are satisfied under most reasonable conditions, and works in conjunction with the lengthening procedure. The proposed methods are shown to enjoy convergence guarantees for strongly convex functions. Detailed implementations of the methods are presented, together with encouraging numerical results.
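
The lengthening procedure and the error-tolerant line search are the paper's contributions and are not reproduced here. For context, a minimal sketch of the classical inverse-Hessian BFGS update that those safeguards protect, with a standard curvature check (names and tolerances are illustrative):

```python
import numpy as np

def bfgs_update(H, s, y, curvature_tol=1e-10):
    """Standard inverse-Hessian BFGS update H -> H_new.

    s = x_new - x_old, y = grad_new - grad_old. In the noise-tolerant
    variant described above, the lengthening procedure would enlarge the
    step s (collecting the gradient difference further away) so the true
    curvature s^T y dominates the error in y; that procedure is the
    paper's contribution and is not reproduced here.
    """
    sy = s @ y
    if sy <= curvature_tol * np.linalg.norm(s) * np.linalg.norm(y):
        return H                    # skip update when curvature is unreliable
    rho = 1.0 / sy
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)
```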
62 - Amit Verma, Mark Lewis 2021
Quadratic Unconstrained Binary Optimization models are useful for solving a diverse range of optimization problems. Constraints can be added by incorporating quadratic penalty terms into the objective, often with the introduction of slack variables needed for the conversion of inequalities. This transformation can lead to a significant increase in the size and density of the problem. Herein, we propose an efficient approach for recasting inequality constraints that reduces the number of linear and quadratic variables. Experimental results illustrate the efficacy of the approach.
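
To make the conventional transformation concrete before the authors' improvement: an inequality such as x1 + x2 <= 1 over binaries is usually rewritten as the equality x1 + x2 + s = 1 with a binary slack s, then penalized quadratically. A minimal sketch under those assumptions (the penalty weight and names are ours, and the authors' slack-reducing recasting is not shown):

```python
import itertools
import numpy as np

P = 10.0  # illustrative penalty weight; must dominate any objective gain

# Variables ordered (x1, x2, s). Expanding P*(x1 + x2 + s - 1)^2 with
# z^2 = z for binaries gives diagonal terms -P, pairwise terms 2P, and
# a constant offset +P (dropped from the matrix, restored below).
Q = np.array([
    [-P,  2 * P, 2 * P],
    [0.0, -P,    2 * P],
    [0.0, 0.0,   -P],
])

def qubo_energy(Q, z):
    """Energy of a binary assignment under an upper-triangular QUBO matrix."""
    z = np.asarray(z, dtype=float)
    return z @ Q @ z

# Zero-penalty assignments are exactly those with x1 + x2 + s = 1,
# i.e. x1 + x2 <= 1 with the slack taking up the remainder.
for z in itertools.product([0, 1], repeat=3):
    if qubo_energy(Q, z) + P == 0:   # + P restores the dropped constant
        print("feasible:", z)
```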
141 - A.S. Lewis, S.J. Wright 2015
We consider minimization of functions that are compositions of convex or prox-regular functions (possibly extended-valued) with smooth vector functions. A wide variety of important optimization problems fall into this framework. We describe an algorithmic framework based on a subproblem constructed from a linearized approximation to the objective and a regularization term. Properties of local solutions of this subproblem underlie both a global convergence result and an identification property of the active manifold containing the solution of the original problem. Preliminary computational results on both convex and nonconvex examples are promising.
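
For concreteness, the subproblem in frameworks of this prox-linear type typically takes the form below; the notation (iterate x_k, outer function h, smooth inner map c, regularization parameter mu_k) is our assumption for illustration, not a quotation from the paper:

```latex
% Prox-linear subproblem at iterate x_k (notation assumed, not from the paper):
% linearize the smooth inner map c, keep the outer function h exact,
% and add a proximal regularization term with parameter mu_k > 0.
\min_{d \in \mathbb{R}^n} \; h\bigl(c(x_k) + \nabla c(x_k)\, d\bigr)
  \;+\; \frac{\mu_k}{2}\,\lVert d \rVert^2
```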
The travelling salesman problem (TSP) of space trajectory design is complicated by its complex, highly structured design space. Graph-based tree search and stochastic-seeding combinatorial approaches are commonly employed to tackle the time-dependent TSP because of its combinatorial nature. In this paper, a new continuous optimization strategy for this mixed-integer combinatorial problem is proposed: the space trajectory combinatorial problem is tackled using a continuous gradient-based method. A continuous mapping technique is developed to map the integer IDs of the targets in the sequence to a set of continuous design variables. Expected flyby targets are introduced as references and used as priors to select the candidate targets to fly by. Bayesian analysis is employed to model the accumulated posterior of the sequence, and a new objective function with quadratic-form constraints is constructed. The newly introduced auxiliary design variables of the expected targets are optimized together with the original design variables, and a gradient-based optimizer is used to search for the optimal sequence parameters. The performance of the proposed algorithm is demonstrated on a multiple-debris rendezvous problem and a static TSP benchmark.
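
The paper's exact mapping technique is not given in this abstract; one plausible sketch of the general idea is a softmax relaxation (our choice, with illustrative names), where each slot in the flyby sequence carries continuous weights over candidate target IDs so a gradient-based optimizer can move smoothly between integer choices:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def relaxed_sequence(theta, target_features):
    """Map continuous design variables to a 'soft' target sequence.

    theta           : (num_slots, num_targets) continuous variables
    target_features : (num_targets, d) features of each candidate target
    Returns, per slot, a convex combination of target features; as the
    softmax sharpens, each slot collapses onto one integer target ID.
    This is an illustrative relaxation, not the paper's exact mapping.
    """
    weights = np.array([softmax(row) for row in theta])  # (slots, targets)
    soft_targets = weights @ target_features             # (slots, d)
    hard_ids = weights.argmax(axis=1)                    # decoded integer IDs
    return soft_targets, hard_ids

# Toy usage: 3 sequence slots choosing among 5 candidate targets.
rng = np.random.default_rng(0)
theta = rng.normal(size=(3, 5))
features = rng.normal(size=(5, 2))
soft, ids = relaxed_sequence(theta, features)
print(ids)
```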
Although application examples of multilevel optimization have been discussed since the 1990s, the development of solution methods was long limited almost entirely to bilevel cases because of the difficulty of the problem. In recent years, in machine learning, Franceschi et al. proposed a method for solving bilevel optimization problems by replacing the lower-level problem with $T$ steepest-descent update equations for some prechosen iteration number $T$. In this paper, we develop a gradient-based algorithm for multilevel optimization with $n$ levels based on their idea and prove that our reformulation with $nT$ variables asymptotically converges to the original multilevel problem. As far as we know, this is one of the first algorithms with a theoretical guarantee for multilevel optimization. Numerical experiments show that a trilevel hyperparameter learning model that accounts for data poisoning produces more stable prediction results than an existing bilevel hyperparameter learning model in noisy data settings.
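
A minimal sketch of the bilevel (n = 2) case of this unrolling idea on a toy one-dimensional instance; the objectives, step sizes, and the finite-difference outer gradient are our illustrative choices (a real implementation would differentiate through the T updates with automatic differentiation), and the n-level reformulation is not shown:

```python
T = 20        # prechosen number of lower-level gradient steps
ETA = 0.1     # lower-level step size

def lower_level_solve(x):
    """Replace argmin_y g(x, y) with T steepest-descent updates.

    Toy lower-level objective (illustrative): g(x, y) = 0.5 * (y - x)^2,
    so the exact minimizer is y = x and the T steps approach it.
    """
    y = 0.0
    for _ in range(T):
        grad_g = y - x          # d g / d y
        y -= ETA * grad_g
    return y

def upper_level(x):
    """Toy upper-level objective f(x, y_T(x)) = (y_T(x) - 1)^2 + 0.1 x^2."""
    y = lower_level_solve(x)
    return (y - 1.0) ** 2 + 0.1 * x ** 2

# Gradient descent on the unrolled reformulation; the outer gradient is
# approximated by central finite differences for this illustration.
x, lr, h = 0.0, 0.2, 1e-6
for _ in range(200):
    grad = (upper_level(x + h) - upper_level(x - h)) / (2 * h)
    x -= lr * grad
print(x, lower_level_solve(x))
```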