
Fortified Test Functions for Global Optimization and the Power of Multiple Runs

Added by Charles Jekel
Publication date: 2019
Language: English

Some popular functions used to test global optimization algorithms have multiple local optima, all with the same value; that is, all local optima are also global optima. This paper suggests that such functions are easily fortified by adding a localized bump at the location of one of the optima, making them more difficult to optimize because of the multiple competing local optima. The process is illustrated here for the Branin-Hoo function, which has three global optima, using the popular Python SciPy differential evolution (DE) optimizer. DE also allows the gradient-based BFGS local optimizer to be used for final convergence. By making a large number of replicate runs, we establish the probability of reaching a global optimum with the original and fortified Branin-Hoo. With the original function we find a 100% probability of success with a moderate number of function evaluations. With the fortified version, the probability of getting trapped in a non-global optimum could be made small only with a much larger number of function evaluations. However, since the probability of ending up at the global optimum is usually 1/3 or more, it may be beneficial to perform multiple inexpensive optimizations rather than one expensive optimization; the probability that at least one of them hits the global optimum can then be made high. We found that for the most challenging global optimum, multiple runs substantially reduced the extra cost of the fortified function compared to the original Branin-Hoo.
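
The fortification idea and the replicate-run bookkeeping are straightforward to reproduce. The sketch below is a minimal illustration only, not the paper's exact setup: the bump location, depth, width, success tolerance, and DE settings are all assumed values chosen for the example. It uses SciPy's differential_evolution, whose default polish step refines the best member with a gradient-based local minimizer (L-BFGS-B).

    import numpy as np
    from scipy.optimize import differential_evolution

    def branin(x):
        """Branin-Hoo function: three global minima, all with value ~0.397887."""
        x1, x2 = x
        b, c = 5.1 / (4.0 * np.pi ** 2), 5.0 / np.pi
        r, s, t = 6.0, 10.0, 1.0 / (8.0 * np.pi)
        return (x2 - b * x1 ** 2 + c * x1 - r) ** 2 + s * (1.0 - t) * np.cos(x1) + s

    # Assumed fortification: a narrow Gaussian dip at one of the three optima
    # makes it the unique global minimum; the paper's bump may differ in form.
    X_STAR = np.array([np.pi, 2.275])   # one of the three Branin-Hoo minima
    DEPTH, WIDTH = 0.5, 0.5             # assumed bump depth and width

    def fortified_branin(x):
        d2 = np.sum((np.asarray(x) - X_STAR) ** 2)
        return branin(x) - DEPTH * np.exp(-d2 / (2.0 * WIDTH ** 2))

    bounds = [(-5.0, 10.0), (0.0, 15.0)]

    # Replicate runs: estimate the probability that a single DE run ends at the
    # fortified (now unique) global optimum, and the average cost of a run.
    n_runs, hits, nfev = 50, 0, 0
    for seed in range(n_runs):
        res = differential_evolution(fortified_branin, bounds, maxiter=50,
                                     popsize=15, tol=1e-8, seed=seed, polish=True)
        nfev += res.nfev
        if np.linalg.norm(res.x - X_STAR) < 0.1:
            hits += 1
    print(f"success rate {hits / n_runs:.2f}, mean evaluations {nfev / n_runs:.0f}")

The multiple-run argument follows from independence: if a single cheap run succeeds with probability $p$, then $k$ independent runs find the global optimum with probability $1 - (1 - p)^k$; for $p = 1/3$ and $k = 10$ this is already about 0.98, which is why several inexpensive optimizations can beat one expensive one.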

Related research

Some popular functions used to test global optimization algorithms have multiple local optima, all with the same value, making them all global optima. They are easy to make more challenging by fortifying them, that is, by adding a localized bump at the location of one of the optima. In previous work the authors illustrated this for the Branin-Hoo function and the popular differential evolution algorithm, showing that the fortified Branin-Hoo required an order of magnitude more function evaluations. This paper examines the effect of fortifying the Branin-Hoo function on surrogate-based optimization, which usually proceeds by adaptive sampling. Two algorithms are considered: the EGO algorithm, which is based on a Gaussian process (GP), and an algorithm based on radial basis functions (RBF). EGO is found to be more frugal in the number of function evaluations required to identify the correct basin, but it is expensive to run on a desktop, limiting the number of times the runs could be repeated to establish sound statistics on the number of required function evaluations. The RBF algorithm was cheaper to run, providing sounder statistics on performance. A four-dimensional version of the Branin-Hoo function was introduced to assess the effect of dimensionality. The difference between the ordinary function and the fortified one was found to be much more pronounced for the four-dimensional function than for the two-dimensional one.
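
For a rough picture of the adaptive-sampling idea, the sketch below fits an RBF surrogate to the points evaluated so far and places the next sample wherever the surrogate predicts the lowest value. This is a generic surrogate loop under assumed settings (initial design size, kernel, evaluation budget), not the EGO or RBF algorithm evaluated in the paper; EGO in particular would use a GP with an expected-improvement infill criterion rather than pure surrogate minimization.

    import numpy as np
    from scipy.interpolate import RBFInterpolator
    from scipy.optimize import differential_evolution

    def branin(x):
        # Branin-Hoo, as in the sketch above.
        x1, x2 = x
        b, c = 5.1 / (4.0 * np.pi ** 2), 5.0 / np.pi
        r, s, t = 6.0, 10.0, 1.0 / (8.0 * np.pi)
        return (x2 - b * x1 ** 2 + c * x1 - r) ** 2 + s * (1.0 - t) * np.cos(x1) + s

    lb, ub = np.array([-5.0, 0.0]), np.array([10.0, 15.0])
    rng = np.random.default_rng(0)

    # Initial design: plain uniform sampling (a Latin hypercube is more typical).
    X = rng.uniform(lb, ub, size=(10, 2))
    y = np.array([branin(x) for x in X])

    for it in range(30):                       # assumed evaluation budget
        # Small smoothing guards against near-duplicate sample points.
        surrogate = RBFInterpolator(X, y, kernel='thin_plate_spline', smoothing=1e-8)
        # Infill point: minimize the surrogate prediction over the box.
        res = differential_evolution(lambda x: surrogate(np.atleast_2d(x))[0],
                                     list(zip(lb, ub)), maxiter=50, seed=it)
        X = np.vstack([X, res.x])
        y = np.append(y, branin(res.x))

    best = np.argmin(y)
    print("best point", X[best], "value", y[best])
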
In this paper, the problem of safe global maximization (not to be confused with robust optimization) of expensive noisy black-box functions satisfying the Lipschitz condition is considered. Here safe means that the objective function $f(x)$ should not violate a safety threshold during optimization, for instance a certain a priori given value $h$ in a maximization problem. Thus, any new function evaluation (possibly corrupted by noise) must be performed at safe points only, namely at points $y$ for which it is known that $f(y) > h$. The main difficulty is that the optimization algorithm must ensure that the safety constraint is satisfied at a point $y$ before $f(y)$ is evaluated. It is therefore required both to determine the safe region $\Omega$ within the search domain $D$ and to find the global maximum within $\Omega$. An additional difficulty is that these problems must be solved in the presence of noise. The paper starts with a theoretical study of the problem, showing that even though the objective function $f(x)$ satisfies the Lipschitz condition, traditional Lipschitz minorants and majorants cannot be used because of the noise. A $\delta$-Lipschitz framework and two algorithms using it are then proposed to solve the safe global maximization problem. The first method determines the safe area within the search domain, and the second executes the global maximization over the found safe region. For both methods a number of theoretical results on their functioning and convergence are established. Finally, numerical experiments confirming the reliability of the proposed procedures are reported.
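
For intuition (not the paper's exact construction): if $f$ has a known Lipschitz constant $L$ and has already been safely evaluated at $x_i$, then $f(y) \ge f(x_i) - L \| y - x_i \|$, so any point $y$ with $f(x_i) - L \| y - x_i \| > h$ can be certified safe before it is evaluated. With noisy observations this minorant is no longer reliable, which is exactly the gap the proposed $\delta$-Lipschitz framework is meant to close.
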
The paper proves convergence to global optima for a class of distributed algorithms for nonconvex optimization in network-based multi-agent settings. Agents are permitted to communicate over a time-varying undirected graph. Each agent is assumed to possess a local objective function (assumed to be smooth, but possibly nonconvex), and the paper considers algorithms for optimizing the sum of these functions. A distributed algorithm of the consensus+innovations type is proposed which relies on first-order information at the agent level. Under appropriate conditions on network connectivity and the cost objective, convergence to the set of global optima is achieved by an annealing-type approach, with decaying Gaussian noise independently added to each agent's update step. It is shown that the proposed algorithm converges in probability to the set of global minima of the sum function.
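
A generic consensus+innovations update with annealing, written here only to fix ideas (the paper's exact weight sequences and conditions may differ), has agent $i$ perform at step $t$

$x_i(t+1) = x_i(t) - \beta_t \sum_{j \in N_i(t)} \big(x_i(t) - x_j(t)\big) - \alpha_t \nabla f_i\big(x_i(t)\big) + \gamma_t \, w_i(t),$

where $N_i(t)$ is agent $i$'s current neighborhood, $\alpha_t$ and $\beta_t$ are decaying step-size sequences, and $w_i(t)$ is independent zero-mean Gaussian noise whose scale $\gamma_t$ is annealed to zero so that the noise helps escape non-global minima early on without preventing eventual convergence.
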
Nadav Dym (2020)
Quasi branch and bound is a recently introduced generalization of branch and bound in which lower bounds are replaced by a relaxed notion of quasi-lower bounds, required to be lower bounds only for sub-cubes containing a minimizer. This paper studies the possible benefits of this approach for the problem of minimizing a smooth function over a cube, by proposing two quasi branch and bound algorithms, qBnB(2) and qBnB(3), that compare favorably with alternative branch and bound algorithms. The first, qBnB(2), achieves second-order convergence based only on a bound on second derivatives, without requiring calculation of derivatives. As such, it is suitable for derivative-free optimization, where typical algorithms such as Lipschitz optimization have only first-order convergence and so suffer from limited accuracy due to the clustering problem. Additionally, qBnB(2) is provably more efficient than the second-order Lipschitz gradient algorithm, which does require exact calculation of gradients. The second algorithm, qBnB(3), has third-order convergence and finite termination. In contrast with BnB algorithms with similar guarantees, which typically compute lower bounds by solving relatively time-consuming convex optimization problems, calculation of the qBnB(3) bounds requires only a small number of Newton iterations. Our experiments verify the potential of both methods in comparison with state-of-the-art branch and bound algorithms.
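
In symbols, paraphrasing the definition above: writing $f^\ast = \min_{x \in [0,1]^n} f(x)$, a quasi-lower bound $\ell$ need only satisfy $\ell(C) \le f^\ast$ for sub-cubes $C$ that contain a global minimizer, whereas an ordinary branch-and-bound lower bound must satisfy $\ell(C) \le \min_{x \in C} f(x)$ for every sub-cube $C$. Pruning remains valid because any sub-cube with $\ell(C)$ above the best value found so far cannot contain a global minimizer.
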
A novel derivative-free algorithm, optimization by moving ridge functions (OMoRF), for unconstrained and bound-constrained optimization is presented. This algorithm couples trust region methodologies with output-based dimension reduction to accelerate convergence of model-based optimization strategies. The dimension-reducing subspace is updated as the trust region moves through the function domain, allowing OMoRF to be applied to functions with no known global low-dimensional structure. Furthermore, its low computational requirement allows it to make rapid progress when optimizing high-dimensional functions. Its performance is examined on a set of test problems of moderate to high dimension and a high-dimensional design optimization problem. The results show that OMoRF compares favourably to other common derivative-free optimization methods, even for functions in which no underlying global low-dimensional structure is known.