
Testing Surrogate-Based Optimization with the Fortified Branin-Hoo Extended to Four Dimensions

Posted by Charles Jekel
Publication date: 2021
Research field: Informatics Engineering
Paper language: English

Some popular functions used to test global optimization algorithms have multiple local optima, all with the same value, making them all global optima. It is easy to make them more challenging by fortifying them via adding a localized bump at the location of one of the optima. In previous work the authors illustrated this for the Branin-Hoo function and the popular differential evolution algorithm, showing that the fortified Branin-Hoo required an order of magnitude more function evaluations. This paper examines the effect of fortifying the Branin-Hoo function on surrogate-based optimization, which usually proceeds by adaptive sampling. Two algorithms are considered: the EGO algorithm, which is based on a Gaussian process (GP), and an algorithm based on radial basis functions (RBFs). EGO is found to be more frugal in terms of the number of function evaluations required to identify the correct basin, but it is expensive to run on a desktop, limiting the number of times the runs could be repeated to establish sound statistics on the number of required function evaluations. The RBF algorithm was cheaper to run, providing more sound statistics on performance. A four-dimensional version of the Branin-Hoo function was introduced in order to assess the effect of dimensionality. It was found that the difference between the ordinary function and the fortified one was much more pronounced for the four-dimensional function compared to the two-dimensional one.
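
The fortification idea described above can be sketched concretely. Below is a minimal Python illustration (not the authors' code) of the standard two-dimensional Branin-Hoo function and a fortified variant obtained by subtracting a localized Gaussian bump at one of its three global minima, which turns that minimum into the unique global one. The Gaussian bump shape and the depth/width values are assumptions for illustration, and the paper's four-dimensional extension is not reproduced here.

```python
# Minimal sketch (not the authors' exact code) of the Branin-Hoo function and a
# "fortified" variant that deepens one of the three global optima with a localized
# Gaussian bump; the bump's depth and width here are illustrative assumptions.
import numpy as np

def branin_hoo(x1, x2):
    """Standard Branin-Hoo; three global minima with value ~0.397887."""
    a, b, c = 1.0, 5.1 / (4 * np.pi**2), 5.0 / np.pi
    r, s, t = 6.0, 10.0, 1.0 / (8 * np.pi)
    return a * (x2 - b * x1**2 + c * x1 - r)**2 + s * (1 - t) * np.cos(x1) + s

# One of the three global optima of the standard function.
X_STAR = np.array([-np.pi, 12.275])

def fortified_branin_hoo(x1, x2, depth=0.5, width=0.5):
    """Branin-Hoo with a localized negative bump at X_STAR, making it the unique
    global optimum (depth/width values are assumptions, not the paper's)."""
    d2 = (x1 - X_STAR[0])**2 + (x2 - X_STAR[1])**2
    return branin_hoo(x1, x2) - depth * np.exp(-d2 / (2 * width**2))

print(branin_hoo(-np.pi, 12.275))            # ~0.397887 at a standard global optimum
print(fortified_branin_hoo(-np.pi, 12.275))  # slightly lower at the fortified optimum
```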


Read also

Simulation models are widely used in practice to facilitate decision-making in a complex, dynamic and stochastic environment. But they are computationally expensive to execute and optimize, due to lack of analytical tractability. Simulation optimization is concerned with developing efficient sampling schemes -- subject to a computational budget -- to solve such optimization problems. To mitigate the computational burden, surrogates are often constructed using simulation outputs to approximate the response surface of the simulation model. In this tutorial, we provide an up-to-date overview of surrogate-based methods for simulation optimization with continuous decision variables. Typical surrogates, including linear basis function models and Gaussian processes, are introduced. Surrogates can be used either as a local approximation or a global approximation. Depending on the choice, one may develop algorithms that converge to either a local optimum or a global optimum. Representative examples are presented for each category. Recent advances in large-scale computation for Gaussian processes are also discussed.
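
As a rough illustration of the tutorial's setting, the sketch below (with an assumed toy "simulator") fits a Gaussian-process surrogate to a small set of simulation outputs and then optimizes the cheap surrogate mean in place of the expensive model, using standard scikit-learn and SciPy calls.

```python
# Illustrative sketch of a global surrogate for an expensive simulation: fit a
# Gaussian-process surrogate to a handful of samples, then optimize the cheap
# surrogate instead of the simulator. The toy "simulator" below is an assumption.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from scipy.optimize import minimize

def expensive_simulation(x):
    # Stand-in for a costly simulation output.
    return np.sin(3 * x) + 0.1 * x**2

X_train = np.linspace(-2, 2, 8).reshape(-1, 1)      # small design of experiments
y_train = expensive_simulation(X_train).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
gp.fit(X_train, y_train)

# Optimize the surrogate mean (cheap) rather than the simulator (expensive).
res = minimize(lambda x: gp.predict(np.atleast_2d(x))[0],
               x0=np.array([0.0]), bounds=[(-2, 2)])
print("surrogate minimizer:", res.x, "surrogate value:", res.fun)
```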
Gradient-free optimization methods, such as surrogate-based optimization (SBO) methods and genetic (GA) or evolutionary (EA) algorithms, have gained popularity in the field of constrained optimization of expensive black-box functions. However, constraint-handling methods, by both classes of solvers, do not usually guarantee strictly feasible candidates during optimization. This can become an issue in applied engineering problems where design variables must remain feasible for simulations to not fail. We propose a constraint-handling method for computationally inexpensive constraint functions which guarantees strictly feasible candidates when using a surrogate-based optimizer. We compare our method to other SBO, GA/EA and gradient-based algorithms on two (relatively simple and relatively hard) analytical test functions, and an applied fully-resolved Computational Fluid Dynamics (CFD) problem concerned with optimization of the undulatory swimming of a fish-like body, and show that the proposed algorithm produces favorable results while guaranteeing feasible candidates.
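
The abstract does not spell out the constraint-handling mechanism, so the sketch below is only a generic illustration of the underlying idea, not the authors' method: when the constraint function is cheap, candidates can be kept strictly feasible by screening them against the constraint before any expensive objective evaluation is spent. The constraint, objective, and sampling scheme here are assumptions.

```python
# Generic illustration (not the authors' specific method) of keeping candidates
# strictly feasible when constraints are cheap: infeasible candidates are resampled
# before the expensive objective is ever evaluated.
import numpy as np

rng = np.random.default_rng(0)

def cheap_constraint(x):
    # Feasible iff g(x) < 0; assumed to be very cheap compared to the objective.
    return x[0]**2 + x[1]**2 - 1.0

def expensive_objective(x):
    # Stand-in for a costly black-box evaluation (e.g., a CFD run).
    return (x[0] - 0.5)**2 + (x[1] - 0.3)**2

def sample_feasible(bounds, max_tries=1000):
    """Rejection-sample a strictly feasible candidate within the box bounds."""
    for _ in range(max_tries):
        x = np.array([rng.uniform(lo, hi) for lo, hi in bounds])
        if cheap_constraint(x) < 0:          # strict feasibility check
            return x
    raise RuntimeError("no feasible candidate found")

bounds = [(-1.0, 1.0), (-1.0, 1.0)]
candidates = [sample_feasible(bounds) for _ in range(5)]
values = [expensive_objective(x) for x in candidates]   # only feasible points evaluated
print(min(values))
```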
Some popular functions used to test global optimization algorithms have multiple local optima, all with the same value. That is, all local optima are also global optima. This paper suggests that such functions are easily fortified by adding a localized bump at the location of one of the optima, making the functions more difficult to optimize due to the multiple competing local optima. This process is illustrated here for the Branin-Hoo function, which has three global optima. We use the popular Python SciPy differential evolution (DE) optimizer for the illustration. DE also allows the use of the gradient-based BFGS local optimizer for final convergence. By making a large number of replicate runs we establish the probability of reaching a global optimum with the original and fortified Branin-Hoo. With the original function we find 100% probability of success with a moderate number of function evaluations. With the fortified version, the probability of getting trapped in a non-global optimum could be made small only with a much larger number of function evaluations. However, since the probability of ending up at the global optimum is usually 1/3 or more, it may be beneficial to perform multiple inexpensive optimizations rather than one expensive optimization. Then the probability of one of them hitting the global optimum can be made high. We found that for the most challenging global optimum, multiple runs substantially reduced the extra cost for the fortified function compared to the original Branin-Hoo.
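
A minimal sketch of the kind of experiment described above: SciPy's differential_evolution is run repeatedly on the standard Branin-Hoo function and the number of runs reaching the known global optimum value is counted. The DE settings, seeds, and success tolerance are illustrative choices, not the paper's.

```python
# Sketch of replicate differential-evolution runs on the standard Branin-Hoo.
import numpy as np
from scipy.optimize import differential_evolution

def branin_hoo(x):
    x1, x2 = x
    a, b, c = 1.0, 5.1 / (4 * np.pi**2), 5.0 / np.pi
    r, s, t = 6.0, 10.0, 1.0 / (8 * np.pi)
    return a * (x2 - b * x1**2 + c * x1 - r)**2 + s * (1 - t) * np.cos(x1) + s

bounds = [(-5.0, 10.0), (0.0, 15.0)]    # standard Branin-Hoo domain
successes = 0
for seed in range(10):                  # replicate runs with different seeds
    # polish=True finishes with a gradient-based local optimizer (L-BFGS-B in SciPy).
    res = differential_evolution(branin_hoo, bounds, seed=seed, polish=True)
    if abs(res.fun - 0.397887) < 1e-3:  # known global optimum value
        successes += 1
print(f"reached a global optimum in {successes}/10 runs; "
      f"last run used {res.nfev} function evaluations")
```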
In this article we develop a gradient-based algorithm for the solution of multiobjective optimization problems with uncertainties. To this end, an additional condition is derived for the descent direction in order to account for inaccuracies in the gradients and then incorporated in a subdivision algorithm for the computation of global solutions to multiobjective optimization problems. Convergence to a superset of the Pareto set is proved and an upper bound for the maximal distance to the set of substationary points is given. Besides the applicability to problems with uncertainties, the algorithm is developed with the intention to use it in combination with model order reduction techniques in order to efficiently solve PDE-constrained multiobjective optimization problems.
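
The article's robust descent condition is not given in the abstract, so the sketch below only shows the classical ingredient it builds on: a common descent direction for two objectives, obtained by minimizing the norm of a convex combination of the gradients (closed form in the two-objective case). The article's additional condition for inexact gradients is not reproduced.

```python
# Classical common descent direction for two objectives (not the article's robust
# variant): d = -(a*g1 + (1-a)*g2), with a minimizing ||a*g1 + (1-a)*g2||^2 on [0, 1].
import numpy as np

def common_descent_direction(g1, g2):
    diff = g1 - g2
    denom = float(diff @ diff)
    a = 0.5 if denom == 0.0 else float(np.clip(-(g2 @ diff) / denom, 0.0, 1.0))
    return -(a * g1 + (1 - a) * g2)

# Example: gradients of f1(x) = ||x - (1,0)||^2 and f2(x) = ||x - (0,1)||^2 at x = (0,0).
g1 = np.array([-2.0, 0.0])
g2 = np.array([0.0, -2.0])
d = common_descent_direction(g1, g2)
print(d, d @ g1, d @ g2)   # both inner products are <= 0, so d decreases both objectives
```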
Julien Mairal, 2013
In this paper, we study optimization methods consisting of iteratively minimizing surrogates of an objective function. By proposing several algorithmic variants and simple convergence analyses, we make two main contributions. First, we provide a unified viewpoint for several first-order optimization techniques such as accelerated proximal gradient, block coordinate descent, or Frank-Wolfe algorithms. Second, we introduce a new incremental scheme that experimentally matches or outperforms state-of-the-art solvers for large-scale optimization problems typically arising in machine learning.
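
To make the "iteratively minimizing surrogates" viewpoint concrete, the sketch below instantiates it for a smooth one-dimensional objective: each iteration minimizes the quadratic majorizing surrogate f(x_k) + f'(x_k)(x - x_k) + (L/2)(x - x_k)^2, whose minimizer is the familiar gradient step. The objective and the Lipschitz constant L are illustrative assumptions.

```python
# Surrogate minimization viewpoint: minimizing the quadratic majorizer around x_k
# yields exactly the gradient step x_k - f'(x_k)/L.
import numpy as np

def f(x):
    # Smooth convex objective (illustrative).
    return np.log(1 + np.exp(-2 * x)) + 0.5 * x**2

def grad_f(x):
    return -2 / (1 + np.exp(2 * x)) + x

L = 2.5        # assumed upper bound on the Lipschitz constant of grad_f
x = 3.0
for _ in range(50):
    x = x - grad_f(x) / L      # minimizer of the quadratic surrogate at the current x
print(x, f(x), grad_f(x))      # grad_f(x) should be close to 0 at convergence
```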