
Black-box optimization benchmarking of IPOP-saACM-ES on the BBOB-2012 noisy testbed

Published by: Loshchilov Ilya
Publication date: 2012
Research field: Informatics Engineering
Language: English
Author: Ilya Loshchilov

In this paper, we study the performance of IPOP-saACM-ES, a recently proposed self-adaptive surrogate-assisted Covariance Matrix Adaptation Evolution Strategy. The algorithm was tested with restarts until a total budget of $10^6 D$ function evaluations was reached, where $D$ is the dimension of the function search space. The experiments show that the surrogate model control allows IPOP-saACM-ES to be as robust as the original IPOP-aCMA-ES while outperforming the latter by a factor of 2 to 3 on 6 benchmark problems with moderate noise. On 15 out of 30 benchmark problems in dimension 20, IPOP-saACM-ES exceeds the best records observed during BBOB-2009 and BBOB-2010.
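The restart protocol is straightforward to reproduce. Below is a minimal Python sketch of the budget accounting only: `run_optimizer` is a hypothetical stand-in for IPOP-saACM-ES (plain random search here), the objective is a toy noisy sphere rather than a BBOB function, and the budget is reduced from the paper's $10^6 D$ so the example runs quickly.

```python
# Minimal sketch of the restart protocol: independent runs of a
# hypothetical optimizer until the total evaluation budget is spent.
import random

def sphere(x):
    """Toy noisy objective (an assumption, not a BBOB function)."""
    return sum(v * v for v in x) + random.gauss(0.0, 0.01)

def run_optimizer(f, dim, max_evals, run_cap=5000):
    """One 'run': stops at run_cap evaluations, standing in for a
    stagnation-triggered restart, or earlier if the budget runs out."""
    best, evals = float("inf"), 0
    while evals < min(max_evals, run_cap):
        x = [random.uniform(-5.0, 5.0) for _ in range(dim)]
        best = min(best, f(x))
        evals += 1
    return best, evals

D = 20
budget = 10**4 * D          # the paper uses 10**6 * D; reduced for a quick demo
used, best = 0, float("inf")
while used < budget:        # restart until the total budget is exhausted
    value, evals = run_optimizer(sphere, D, budget - used)
    used += evals
    best = min(best, value)
print(f"best value {best:.4f} after {used} evaluations")
```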


Read also

Ilya Loshchilov (2012)
In this paper, we study the performance of IPOP-saACM-ES and BIPOP-saACM-ES, recently proposed self-adaptive surrogate-assisted Covariance Matrix Adaptation Evolution Strategies. Both algorithms were tested with restarts until a total budget of $10^6 D$ function evaluations was reached, where $D$ is the dimension of the function search space. We compared the surrogate-assisted algorithms with their surrogate-less counterparts.
Peng Wang, Gang Xin, Yuwei Jiao (2021)
In recent decades, with the emergence of numerous novel intelligent optimization algorithms, many optimization researchers have begun to look for a basic search mechanism that provides a more essential explanation of their schemes. This paper aims to study the basic mechanism of black-box optimization algorithms through quantum theory. To achieve this goal, the Schrödinger equation is employed to establish a relationship between the optimization problem and a quantum system, which makes it possible to study the dynamic search behaviors of the evolution process with quantum theory. Moreover, to explore the basic behavior of the optimization system, the optimization problem is assumed to be decomposed and approximated. A multilevel-approximation quantum dynamics model of the optimization algorithm is then established, which provides a mathematical and physical framework for analyzing the algorithm. Correspondingly, the basic search behavior is derived from this model and shown to be governed by quantum theory. Comparison experiments and analysis across different bare-bones algorithms confirm the existence of this quantum-mechanics-based basic search mechanism in black-box optimization.
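As one concrete illustration of how an objective can be tied to a quantum system (a common construction in quantum-inspired optimization, not necessarily that paper's exact model), the objective $f(x)$ can play the role of the potential in the time-dependent Schrödinger equation, so that the probability density $|\psi|^2$ concentrates around low-$f$ regions as the system evolves:

```latex
% Illustrative construction (an assumption, not necessarily the paper's
% exact model): the objective f(x) plays the role of the potential, so
% the probability density |psi|^2 concentrates around low-f regions.
\[
  i\hbar \, \frac{\partial \psi(x,t)}{\partial t}
  = \left[ -\frac{\hbar^2}{2m} \nabla^2 + f(x) \right] \psi(x,t),
  \qquad
  p(x,t) = |\psi(x,t)|^2 .
\]
```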
The encoding of solutions in black-box optimization is a delicate, handcrafted balance between expressiveness and domain knowledge -- between exploring a wide variety of solutions and ensuring that those solutions are useful. Our main insight is that this process can be automated by generating a dataset of high-performing solutions with a quality diversity algorithm (here, MAP-Elites), then learning a representation with a generative model (here, a Variational Autoencoder) from that dataset. Our second insight is that this representation can be used to scale quality diversity optimization to higher dimensions -- but only if we carefully mix solutions generated with the learned representation and those generated with traditional variation operators. We demonstrate these capabilities by learning a low-dimensional encoding for the inverse kinematics of a planar arm with a thousand joints. The results show that learned representations make it possible to solve high-dimensional problems with orders of magnitude fewer evaluations than standard MAP-Elites, and that, once solved, the produced encoding can be used for rapid optimization of novel but similar tasks. The presented techniques not only scale up quality diversity algorithms to high dimensions, but show that black-box optimization encodings can be learned automatically rather than hand-designed.
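A minimal sketch of the mixing step described in that abstract, assuming a trained VAE is available through hypothetical `encode`/`decode` functions: with probability `p_latent` a child is produced by perturbing the parent's latent code and decoding it, otherwise a traditional Gaussian mutation is applied directly in solution space.

```python
# Hedged sketch of the mixing step between learned-representation
# variation and a traditional variation operator (Gaussian mutation).
# `encode`/`decode` stand in for a trained VAE and are assumptions.
import random

def gaussian_mutation(x, sigma=0.1):
    return [v + random.gauss(0.0, sigma) for v in x]

def latent_variation(x, encode, decode, sigma=0.1):
    z = encode(x)                                   # solution -> latent code
    z = [v + random.gauss(0.0, sigma) for v in z]   # perturb in latent space
    return decode(z)                                # child lies on the learned manifold

def make_child(parent, encode, decode, p_latent=0.5):
    if random.random() < p_latent:
        return latent_variation(parent, encode, decode)
    return gaussian_mutation(parent)

# Toy usage with identity encode/decode; a real VAE would go here.
print(make_child([0.0] * 8, encode=list, decode=list))
```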
In this paper, the problem of safe global maximization (not to be confused with robust optimization) of expensive noisy black-box functions satisfying the Lipschitz condition is considered. The notion of safety means that the objective function $f(x)$ should not violate a safety threshold during optimization, for instance, a certain a priori given value $h$ in a maximization problem. Thus, any new function evaluation (possibly corrupted by noise) must be performed at safe points only, namely, at points $y$ for which it is known that the objective function $f(y) > h$. The main difficulty here is that the optimization algorithm should ensure that the safety constraint is satisfied at a point $y$ before the evaluation of $f(y)$ is executed. It is therefore required both to determine the safe region $\Omega$ within the search domain $D$ and to find the global maximum within $\Omega$. An additional difficulty is that these problems must be solved in the presence of noise. The paper starts with a theoretical study of the problem and shows that even though the objective function $f(x)$ satisfies the Lipschitz condition, traditional Lipschitz minorants and majorants cannot be used due to the presence of noise. A $\delta$-Lipschitz framework and two algorithms using it are then proposed to solve the safe global maximization problem. The first method determines the safe area within the search domain, and the second executes the global maximization over the found safe region. For both methods, a number of theoretical results related to their functioning and convergence are established. Finally, numerical experiments confirming the reliability of the proposed procedures are reported.
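The core difficulty, certifying that a point is safe before evaluating it, can be illustrated with a small sketch under stated assumptions (bounded noise and a known Lipschitz constant); this is a toy construction in the spirit of the framework above, not the paper's $\delta$-Lipschitz algorithms.

```python
# Toy safety certificate: assuming |noise| <= delta and f L-Lipschitz,
# an observation f_noisy at x_i certifies f(y) >= f_noisy - delta - L * |y - x_i|,
# so y is provably safe whenever that lower bound exceeds the threshold h.
def is_provably_safe(y, evaluated, L, delta, h):
    """evaluated: list of (x_i, f_noisy_i) pairs; 1-D case for simplicity."""
    return any(f_noisy - delta - L * abs(y - x_i) > h
               for x_i, f_noisy in evaluated)

# Example: one observation f_noisy = 3.0 at x = 1.0 certifies y = 1.2,
# since 3.0 - 0.5 - 2.0 * 0.2 = 2.1 > h = 0.
print(is_provably_safe(1.2, [(1.0, 3.0)], L=2.0, delta=0.5, h=0.0))  # True
```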
Ilya Loshchilov (2014)
We propose a computationally efficient limited-memory Covariance Matrix Adaptation Evolution Strategy for large-scale optimization, which we call LM-CMA-ES. The LM-CMA-ES is a stochastic, derivative-free algorithm for numerical optimization of non-linear, non-convex problems in continuous domains. Inspired by the limited-memory BFGS method of Liu and Nocedal (1989), the LM-CMA-ES samples candidate solutions according to a covariance matrix reproduced from $m$ direction vectors selected during the optimization process. The decomposition of the covariance matrix into Cholesky factors makes it possible to reduce the time and memory complexity of the sampling to $O(mn)$, where $n$ is the number of decision variables. When $n$ is large (e.g., $n > 1000$), even relatively small values of $m$ (e.g., $m = 20, 30$) are sufficient to efficiently solve fully non-separable problems and to reduce the overall run-time.
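The $O(mn)$ complexity comes from never forming the $n \times n$ covariance matrix: an implicit Cholesky factor is applied to a standard normal vector as a sequence of $m$ cheap rank-one updates. The sketch below is illustrative only (the exact reconstruction rule and the coefficients $b_j$ in LM-CMA-ES differ); each update costs $O(n)$, so sampling one candidate costs $O(mn)$.

```python
# Illustrative O(m*n) sampling in the spirit of LM-CMA-ES: an implicit
# Cholesky factor is applied to z ~ N(0, I) as m rank-one updates, so
# no n-by-n matrix is ever stored or factorized.
import random

def sample_candidate(mean, sigma, directions, coeffs):
    """directions: m vectors v_j of length n; coeffs: m scalars b_j (assumptions)."""
    n = len(mean)
    x = [random.gauss(0.0, 1.0) for _ in range(n)]    # z ~ N(0, I)
    for v, b in zip(directions, coeffs):              # x <- x + b * (v^T x) * v
        dot = sum(vi * xi for vi, xi in zip(v, x))    # O(n) per stored vector
        x = [xi + b * dot * vi for vi, xi in zip(v, x)]
    return [mi + sigma * xi for mi, xi in zip(mean, x)]  # candidate = mean + sigma * A z

# Example: n = 5 decision variables, m = 2 stored directions (toy values).
print(sample_candidate([0.0] * 5, 0.3,
                       [[1, 0, 0, 0, 0], [0, 1, 0, 0, 0]], [0.5, 0.2]))
```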