
Explicit Superlinear Convergence of Broyden's Method in Nonlinear Equations

Published by: Dachao Lin
Publication date: 2021
Paper language: English





In this paper, we follow the recent works on the explicit superlinear convergence rate of quasi-Newton methods. We focus on the classical Broyden's methods for solving nonlinear equations and establish explicit (local) superlinear convergence when the initial point and the initial approximate Jacobian are close enough to the solution and its Jacobian. Our results show two natural trade-offs. The first is between the superlinear convergence rate and the radius of the neighborhood at initialization. The second is the balance between the initial distance to the solution and the initial distance to its Jacobian. Moreover, our analysis covers the two original Broyden's methods: Broyden's "good" and "bad" methods. We identify the difference between them in the size of the local convergence region and the condition-number dependence.
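The rank-one update at the heart of these results can be sketched as follows. This is a minimal illustrative implementation of Broyden's "good" method (the test function, starting point, and initial Jacobian below are hypothetical choices, not taken from the paper); the "bad" method would instead maintain an approximate inverse Jacobian H and update it as H += outer(s - H @ y, y) / (y @ y).

```python
import numpy as np

def broyden_good(F, x0, B0, tol=1e-10, max_iter=50):
    """Broyden's 'good' method for F(x) = 0: a quasi-Newton step
    followed by a rank-one update of the Jacobian approximation B."""
    x, B = x0.astype(float), B0.astype(float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(B, -Fx)      # quasi-Newton step
        x_new = x + s
        y = F(x_new) - Fx                # change in residual
        # good update: B <- B + (y - B s) s^T / (s^T s)
        B += np.outer(y - B @ s, s) / (s @ s)
        x = x_new
    return x

# Hypothetical example: F(x) = [x0^2 + x1^2 - 2, x0 - x1], root (1, 1).
# We start near the root with B0 = Jacobian at x0, reflecting the
# local-convergence setting the abstract describes.
F = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])
x0 = np.array([1.2, 0.9])
B0 = np.array([[2.4, 1.8], [1.0, -1.0]])  # Jacobian of F at x0
root = broyden_good(F, x0, B0)
```

The initialization mirrors the paper's trade-off: the closer x0 and B0 are to the solution and its Jacobian, the faster the superlinear phase kicks in.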




Read also

In this paper, we follow the work (Rodomanov and Nesterov 2021) to study quasi-Newton methods, which is based on the updating formulas from a certain subclass of the Broyden family. We focus on the common SR1 and BFGS quasi-Newton methods to establish better explicit superlinear convergence. First, based on the greedy quasi-Newton update in Rodomanov and Nesterov's work, which greedily selects the direction so as to maximize a certain measure of progress, we improve the linear convergence rate to a condition-number-free superlinear convergence rate when applied with the well-known SR1 and BFGS updates. Moreover, our results can also be applied to the inverse approximation of the SR1 update. Second, based on a random update, which selects the direction randomly from any spherically symmetric distribution, we show the same superlinear convergence rate as above. Our analysis is closely related to the approximation of a given Hessian matrix, unconstrained quadratic objectives, as well as general strongly convex, smooth and strongly self-concordant functions.
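The greedy-update idea can be illustrated in the simplest setting mentioned at the end of the abstract: approximating a fixed symmetric matrix A. The sketch below uses SR1 updates with the direction chosen as the coordinate vector maximizing the diagonal of the residual B - A; this is a simplified progress measure for illustration, not necessarily the exact measure used by Rodomanov and Nesterov.

```python
import numpy as np

def greedy_sr1_approx(A, B0, iters):
    """Greedily drive B toward a fixed symmetric matrix A with SR1 updates.
    Each update zeroes the residual R = B - A along the chosen direction,
    reducing rank(R) by one; if B0 - A is PSD, n steps recover A exactly."""
    B = B0.copy().astype(float)
    n = A.shape[0]
    for _ in range(iters):
        R = B - A
        i = np.argmax(np.diag(R))        # greedy coordinate direction
        u = np.zeros(n); u[i] = 1.0
        Ru = R @ u
        denom = u @ Ru
        if abs(denom) < 1e-14:           # residual already zero along u
            break
        # SR1 update: B <- B - (R u)(R u)^T / (u^T R u)
        B -= np.outer(Ru, Ru) / denom
    return B

# Hypothetical example: B0 = L*I with L above the largest eigenvalue of A,
# so the residual starts positive semidefinite.
A = np.array([[2.0, 0.5, 0.0],
              [0.5, 3.0, 0.3],
              [0.0, 0.3, 1.0]])
B = greedy_sr1_approx(A, 4.0 * np.eye(3), iters=3)
```

In exact arithmetic the residual loses one rank per step, which is the mechanism behind the finite-step exactness that underpins the superlinear rates discussed above.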
In this paper, we conduct a convergence rate analysis of the augmented Lagrangian method with a practical relative error criterion designed in Eckstein and Silva [Math. Program., 141, 319--348 (2013)] for convex nonlinear programming problems. We show that under a mild local error bound condition, this method admits locally a Q-linear rate of convergence. More importantly, we show that the modulus of the convergence rate is inversely proportional to the penalty parameter. That is, an asymptotically superlinear convergence is obtained if the penalty parameter used in the algorithm is increasing to infinity, or an arbitrarily Q-linear rate of convergence can be guaranteed if the penalty parameter is fixed but sufficiently large. Besides, as a byproduct, the convergence, as well as the convergence rate, of the distance from the primal sequence to the solution set of the problem is obtained.
Broyden's method, widely used in quantum chemistry electronic-structure calculations for the numerical solution of nonlinear equations in many variables, is applied in the context of the nuclear many-body problem. Examples include the unitary gas problem, the nuclear density functional theory with Skyrme functionals, and the nuclear coupled-cluster theory. The stability of the method, its ease of use, and its rapid convergence rates make Broyden's method a tool of choice for large-scale nuclear structure calculations.
We consider the use of a curvature-adaptive step size in gradient-based iterative methods, including quasi-Newton methods, for minimizing self-concordant functions, extending an approach first proposed for Newton's method by Nesterov. This step size has a simple expression that can be computed analytically; hence, line searches are not needed. We show that using this step size in the BFGS method (and quasi-Newton methods in the Broyden convex class other than the DFP method) results in superlinear convergence for strongly convex self-concordant functions. We present numerical experiments comparing gradient descent and BFGS methods using the curvature-adaptive step size to traditional methods on deterministic logistic regression problems, and t
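Nesterov's analytic step size for Newton's method on self-concordant functions is t = 1/(1 + lambda(x)), where lambda(x) is the Newton decrement; the abstract above extends this idea to BFGS and the Broyden convex class. A minimal one-dimensional sketch of the classical Newton instance (the objective f(x) = x - log x is an illustrative choice, not taken from the paper):

```python
import math

def damped_newton_1d(fprime, fsecond, x0, iters=30):
    """Damped Newton with the line-search-free step t = 1/(1 + lambda(x)),
    where lambda(x) = sqrt(f'(x)^2 / f''(x)) is the Newton decrement."""
    x = x0
    for _ in range(iters):
        g, h = fprime(x), fsecond(x)
        lam = math.sqrt(g * g / h)            # Newton decrement
        x -= (1.0 / (1.0 + lam)) * g / h      # damped Newton step
    return x

# Self-concordant example: f(x) = x - log(x), minimizer x* = 1.
xstar = damped_newton_1d(lambda x: 1.0 - 1.0 / x,
                         lambda x: 1.0 / x**2,
                         x0=5.0)
```

Because the step is a closed-form function of the gradient and curvature at the current point, no line search is required, which is the property the quasi-Newton extension preserves.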
Dynamical systems that are subject to continuous uncertain fluctuations can be modelled using Stochastic Differential Equations (SDEs). Controlling such systems results in solving path-constrained SDEs. Broadly, these problems fall under the category of Stochastic Differential-Algebraic Equations (SDAEs). In this article, the focus is on combining ideas from the local theory of Differential-Algebraic Equations with that of Stochastic Differential Equations. The question of existence and uniqueness of the solution for SDAEs is addressed by using the contraction mapping theorem in an appropriate Banach space to arrive at a sufficient condition. From the geometric point of view, a necessary condition is derived for the existence of the solution. It is observed that there exists a class of high-index SDAEs for which there is no solution. Hence, computational methods to find approximate solutions of high-index equations are presented. The techniques are illustrated in the form of algorithms with examples and numerical computations.