
Steklov Regularization and Trajectory Methods for Univariate Global Optimization

Posted by C. Yalçın Kaya
Publication date: 2018
Language: English





We introduce a new regularization technique, using what we refer to as the Steklov regularization function, and apply this technique to devise an algorithm that computes a global minimizer of univariate coercive functions. First, we show that the Steklov regularization convexifies a given univariate coercive function. Then, using the regularization parameter as the independent variable, a trajectory is constructed on the surface generated by the Steklov function. For monic quartic polynomials, we prove that this trajectory does generate a global minimizer. In the process, we derive some properties of quartic polynomials. Comparisons are made with a previous approach which uses a quadratic regularization function. We carry out numerical experiments to illustrate the working of the new method on polynomials of various degrees as well as a non-polynomial function.
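
The Steklov regularization of a function f is its local average, mu(x, t) = (1/(2t)) * integral of f over [x - t, x + t], which flattens narrow local minima for large t. The following is a minimal sketch of the trajectory idea; the shrink schedule, tracking window, and warm-starting heuristic are illustrative assumptions, not the authors' construction:

```python
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def steklov(f, x, t):
    """Steklov regularization: the average of f over [x - t, x + t]."""
    val, _ = quad(f, x - t, x + t)
    return val / (2.0 * t)

def trajectory_minimize(f, t0=5.0, shrink=0.8, t_min=1e-4, x0=0.0, radius=10.0):
    """Track the minimizer of the Steklov function as t shrinks towards 0.

    For a coercive f the regularization is convex once t is large enough,
    so the first minimization is global; each later step is warm-started
    near the previous trajectory point (a tracking heuristic).
    """
    t, x = t0, x0
    while t > t_min:
        res = minimize_scalar(lambda z: steklov(f, z, t),
                              bounds=(x - radius, x + radius), method="bounded")
        x = res.x
        t *= shrink
        radius = max(2.0 * t, 1e-2)   # tighten the tracking window with t
    return x

# monic quartic with two local minima; the global one lies near x = -2.19
f = lambda x: x**4 + 2 * x**3 - 3 * x**2 + 1.0
print(trajectory_minimize(f))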


Read also

Autonomous exploration is an application of growing importance in robotics. A promising strategy is ergodic trajectory planning, whereby an agent spends in each area a fraction of time proportional to that area's information probability density. In this paper, a decentralized ergodic multi-agent trajectory planning algorithm featuring limited communication constraints is proposed. The agents' trajectories are designed by optimizing a weighted cost encompassing ergodicity, control energy and close-distance operation objectives. To solve the underlying optimal control problem, a second-order descent iterative method coupled with a projection operator in the form of an optimal feedback controller is used. Exhaustive numerical analyses show that the multi-agent solution enables much more efficient exploration in terms of task completion time and control energy distribution by leveraging collaboration among agents.
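
As context for the ergodicity objective in the abstract above, planners of this kind typically score a trajectory by comparing its time-averaged basis coefficients with those of the target information density (the spectral metric of Mathew and Mezić). A minimal 1-D sketch, where the cosine basis, Sobolev-type weights, and grid resolution are all illustrative assumptions rather than this paper's formulation:

```python
import numpy as np

def ergodic_metric(traj, target_pdf, n_modes=20):
    """Spectral ergodicity metric on [0, 1] (1-D illustrative sketch).

    Compares the trajectory's time-averaged basis coefficients c_k with the
    target density's coefficients xi_k; low-frequency (large-scale coverage)
    mismatches are weighted most heavily.
    """
    xs = np.linspace(0.0, 1.0, 1001)
    dx = xs[1] - xs[0]
    pdf = target_pdf(xs)
    pdf /= pdf.sum() * dx                        # normalize the target density
    eps = 0.0
    for k in range(n_modes):
        basis = np.cos(np.pi * k * xs)           # cosine basis on [0, 1]
        ck = np.cos(np.pi * k * traj).mean()     # time average along trajectory
        xik = (basis * pdf).sum() * dx           # target coefficient
        weight = (1.0 + k ** 2) ** -1.0          # Sobolev-type decay weight
        eps += weight * (ck - xik) ** 2
    return eps

# a trajectory that dwells where the density is high scores (much) lower
target = lambda x: np.exp(-0.5 * ((x - 0.7) / 0.1) ** 2)
matched = np.clip(np.random.default_rng(0).normal(0.7, 0.1, 5000), 0, 1)
uniform = np.random.default_rng(0).uniform(0, 1, 5000)
print(ergodic_metric(matched, target), ergodic_metric(uniform, target))
```
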
The paper proves convergence to global optima for a class of distributed algorithms for nonconvex optimization in network-based multi-agent settings. Agents are permitted to communicate over a time-varying undirected graph. Each agent is assumed to possess a local objective function (assumed to be smooth, but possibly nonconvex). The paper considers algorithms for optimizing the sum function. A distributed algorithm of the consensus+innovations type is proposed which relies on first-order information at the agent level. Under appropriate conditions on network connectivity and the cost objective, convergence to the set of global optima is achieved by an annealing-type approach, with decaying Gaussian noise independently added into each agent's update step. It is shown that the proposed algorithm converges in probability to the set of global minima of the sum function.
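
A minimal sketch of a consensus+innovations update with annealing noise on a fixed ring graph; the local objectives, step-size exponents, and noise schedule below are illustrative assumptions, not the paper's exact conditions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_steps = 10, 20000
x = rng.uniform(-3.0, 3.0, n_agents)          # one scalar iterate per agent

# illustrative nonconvex local objectives f_i(z) = (z - a_i)^2 + sin(3 z + b_i)
a = rng.uniform(-1.0, 1.0, n_agents)
b = rng.uniform(0.0, 2.0 * np.pi, n_agents)
grad = lambda z: 2.0 * (z - a) + 3.0 * np.cos(3.0 * z + b)  # stacked local gradients

for t in range(1, n_steps + 1):
    beta = 1.0 / t ** 0.6                         # consensus weight (decaying)
    alpha = 1.0 / t                               # innovation/gradient weight
    gamma = 1.0 / (t * np.sqrt(np.log(t + 1.0)))  # annealing noise scale
    # ring-graph Laplacian action: sum over neighbors of (x_i - x_j)
    lap = 2.0 * x - np.roll(x, 1) - np.roll(x, -1)
    x = x - beta * lap - alpha * grad(x) + gamma * rng.standard_normal(n_agents)

print(x)   # agents (approximately) agree on a minimizer of sum_i f_i
```
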
Nadav Dym, 2020
Quasi branch and bound is a recently introduced generalization of branch and bound, where lower bounds are replaced by a relaxed notion of quasi-lower bounds, required to be lower bounds only for sub-cubes containing a minimizer. This paper is devoted to studying the possible benefits of this approach for the problem of minimizing a smooth function over a cube. This is accomplished by suggesting two quasi branch and bound algorithms, qBnB(2) and qBnB(3), that compare favorably with alternative branch and bound algorithms. The first algorithm we propose, qBnB(2), achieves second-order convergence based only on a bound on second derivatives, without requiring calculation of derivatives. As such, this algorithm is suitable for derivative-free optimization, for which typical algorithms such as Lipschitz optimization only have first-order convergence and so suffer from limited accuracy due to the clustering problem. Additionally, qBnB(2) is provably more efficient than the second-order Lipschitz gradient algorithm, which does require exact calculation of gradients. The second algorithm we propose, qBnB(3), has third-order convergence and finite termination. In contrast with BnB algorithms with similar guarantees, which typically compute lower bounds by solving relatively time-consuming convex optimization problems, calculation of qBnB(3) bounds only requires solving a small number of Newton iterations. Our experiments verify the potential of both these methods in comparison with state-of-the-art branch and bound algorithms.
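
The quasi-lower bound can be realized in one dimension from function values and a second-derivative bound alone: if an interval of half-width r contains an unconstrained minimizer x* (so f'(x*) = 0), Taylor expansion with |f''| <= M gives f(mid) - (M/2) r^2 <= f(x*), and this bound need only be valid on minimizer-containing intervals. A minimal sketch in this spirit; the interval management and stopping rule are my assumptions, not the paper's qBnB(2):

```python
import heapq

def quasi_bnb(f, a, b, M, tol=1e-8):
    """Derivative-free branch and bound with a quasi-lower bound.

    The bound f(mid) - (M/2) r^2 is only guaranteed to underestimate the
    minimum on intervals containing a minimizer -- the 'quasi' relaxation.
    """
    mid, r = 0.5 * (a + b), 0.5 * (b - a)
    best_x, best_val = mid, f(mid)
    heap = [(best_val - 0.5 * M * r * r, a, b)]   # (quasi-lower bound, interval)
    while heap:
        qlb, a, b = heapq.heappop(heap)
        if qlb > best_val - tol:                  # cannot improve the incumbent
            break
        for lo, hi in ((a, 0.5 * (a + b)), (0.5 * (a + b), b)):
            mid, r = 0.5 * (lo + hi), 0.5 * (hi - lo)
            val = f(mid)
            if val < best_val:
                best_x, best_val = mid, val
            heapq.heappush(heap, (val - 0.5 * M * r * r, lo, hi))
    return best_x, best_val

# illustrative quartic: |f''| = |12 x^2 + 12 x - 6| <= 66 on [-3, 2]
f = lambda x: x**4 + 2 * x**3 - 3 * x**2 + 1.0
print(quasi_bnb(f, -3.0, 2.0, M=66.0))
```
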
This paper investigates the problem of computing the equilibrium of competitive games, which is often modeled as a constrained saddle-point optimization problem with probability simplex constraints. Despite recent efforts in understanding the last-iterate convergence of extragradient methods in the unconstrained setting, the theoretical underpinnings of these methods in constrained settings, especially those using multiplicative updates, remain highly inadequate, even when the objective function is bilinear. Motivated by the algorithmic role of entropy regularization in single-agent reinforcement learning and game theory, we develop provably efficient extragradient methods to find the quantal response equilibrium (QRE) -- the solution of a zero-sum two-player matrix game with entropy regularization -- at a linear rate. The proposed algorithms can be implemented in a decentralized manner, where each player executes symmetric and multiplicative updates iteratively using its own payoff without directly observing the opponent's actions. In addition, by controlling the knob of entropy regularization, the proposed algorithms can locate an approximate Nash equilibrium of the unregularized matrix game at a sublinear rate without assuming the Nash equilibrium to be unique. Our methods also lead to efficient policy extragradient algorithms for solving entropy-regularized zero-sum Markov games at a linear rate. All of our convergence rates are nearly dimension-free, i.e., independent of the size of the state and action spaces up to logarithmic factors, highlighting the positive role of entropy regularization for accelerating convergence.
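
A minimal sketch of extragradient with entropy regularization and multiplicative updates for a zero-sum matrix game min_x max_y x^T A y; the step size, the exact arrangement of the extrapolation step, and the example game are my assumptions rather than the paper's precise scheme:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def qre_extragradient(A, tau=0.1, eta=0.05, iters=2000):
    """Entropy-regularized extragradient for min_x max_y x^T A y.

    Multiplicative (mirror) updates in log-space; with regularization
    tau > 0 the iterates converge to the quantal response equilibrium
    (QRE) of the regularized game.  Illustrative sketch only.
    """
    m, n = A.shape
    x, y = np.full(m, 1.0 / m), np.full(n, 1.0 / n)
    for _ in range(iters):
        # extrapolation (midpoint) step
        x_mid = softmax((1 - eta * tau) * np.log(x) - eta * (A @ y))
        y_mid = softmax((1 - eta * tau) * np.log(y) + eta * (A.T @ x))
        # update step using midpoint payoffs
        x = softmax((1 - eta * tau) * np.log(x) - eta * (A @ y_mid))
        y = softmax((1 - eta * tau) * np.log(y) + eta * (A.T @ x_mid))
    return x, y

# rock-paper-scissors: by symmetry the QRE is uniform play for any tau > 0
A = np.array([[0.0, 1.0, -1.0], [-1.0, 0.0, 1.0], [1.0, -1.0, 0.0]])
print(qre_extragradient(A))
```
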
We present and mathematically analyze an online adjoint algorithm for the optimization of partial differential equations (PDEs). Traditional adjoint algorithms would typically solve a new adjoint PDE at each optimization iteration, which can be computationally costly. In contrast, an online adjoint algorithm updates the design variables in continuous time and thus constantly makes progress towards minimizing the objective function. The online adjoint algorithm we consider is similar in spirit to the pseudo-time-stepping, one-shot method which has been previously proposed. Motivated by the application of such methods to engineering problems, we mathematically study the convergence of the online adjoint algorithm. The online adjoint algorithm relies upon a time-relaxed adjoint PDE which provides an estimate of the direction of steepest descent. The algorithm updates this estimate continuously in time, and it asymptotically converges to the exact direction of steepest descent as $t \rightarrow \infty$. We rigorously prove that the online adjoint algorithm converges to a critical point of the objective function for optimizing the PDE. Under appropriate technical conditions, we also prove a convergence rate for the algorithm. A crucial step in the convergence proof is a multi-scale analysis of the coupled system for the forward PDE, adjoint PDE, and the gradient descent ODE for the design variables.
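
The idea can be sketched on a steady discretized "PDE" A u = theta: instead of solving the adjoint system exactly at every design update, relax the state, the adjoint, and the design together in pseudo-time, with the design moving on a slower timescale. The operator, objective, step sizes, and timescale separation below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1-D Laplacian stencil
A = np.eye(n) + 0.5 * L       # well-conditioned elliptic operator (illustrative)
u_target = rng.standard_normal(n)

# objective J(u) = 0.5 ||u - u_target||^2, constraint A u = theta
u, lam, theta = np.zeros(n), np.zeros(n), np.zeros(n)
dt, alpha = 0.1, 0.05         # pseudo-time step; slower design timescale

for _ in range(20000):
    u = u + dt * (theta - A @ u)                   # relax forward system A u = theta
    lam = lam + dt * ((u - u_target) - A.T @ lam)  # relax adjoint A^T lam = dJ/du
    theta = theta - alpha * dt * lam               # descend along estimated gradient

print(np.linalg.norm(np.linalg.solve(A, theta) - u_target))  # state error -> 0
```
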