
Discovering Representations for Black-box Optimization

Added by Adam Gaier
Publication date: 2020
Language: English





The encoding of solutions in black-box optimization is a delicate, handcrafted balance between expressiveness and domain knowledge -- between exploring a wide variety of solutions, and ensuring that those solutions are useful. Our main insight is that this process can be automated by generating a dataset of high-performing solutions with a quality diversity algorithm (here, MAP-Elites), then learning a representation with a generative model (here, a Variational Autoencoder) from that dataset. Our second insight is that this representation can be used to scale quality diversity optimization to higher dimensions -- but only if we carefully mix solutions generated with the learned representation and those generated with traditional variation operators. We demonstrate these capabilities by learning a low-dimensional encoding for the inverse kinematics of a thousand-joint planar arm. The results show that learned representations make it possible to solve high-dimensional problems with orders of magnitude fewer evaluations than standard MAP-Elites, and that, once solved, the produced encoding can be used for rapid optimization of novel, but similar, tasks. The presented techniques not only scale up quality diversity algorithms to high dimensions, but also show that black-box optimization encodings can be automatically learned rather than hand designed.
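As a concrete illustration of the pipeline the abstract describes, below is a minimal, self-contained Python sketch: (1) fill a MAP-Elites archive with Gaussian variation, (2) train a small VAE on the elites, (3) mix variation through the learned representation with the traditional operator. The toy objective, behavior descriptor, archive size, and 50/50 mixing ratio are illustrative assumptions, not the paper's setup.

    import numpy as np
    import torch
    import torch.nn as nn

    DIM, LATENT, CELLS = 20, 4, 50

    def fitness(x):                    # toy objective: prefer smooth genomes
        return -np.sum(np.diff(x) ** 2)

    def behavior(x):                   # toy 1-D behavior descriptor in [0, 1]
        return (np.tanh(x.mean()) + 1) / 2

    archive = {}                       # cell index -> (fitness, genome)

    def try_insert(x):
        cell = min(int(behavior(x) * CELLS), CELLS - 1)
        f = fitness(x)
        if cell not in archive or f > archive[cell][0]:
            archive[cell] = (f, x)

    # (1) standard MAP-Elites: random initialization, then Gaussian variation
    for _ in range(200):
        try_insert(np.random.randn(DIM))
    for _ in range(5000):
        _, parent = archive[np.random.choice(list(archive))]
        try_insert(parent + 0.1 * np.random.randn(DIM))

    # (2) learn a representation of the elites with a small VAE
    class VAE(nn.Module):
        def __init__(self):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(DIM, 32), nn.ReLU(),
                                     nn.Linear(32, 2 * LATENT))
            self.dec = nn.Sequential(nn.Linear(LATENT, 32), nn.ReLU(),
                                     nn.Linear(32, DIM))
        def forward(self, x):
            mu, logvar = self.enc(x).chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
            return self.dec(z), mu, logvar

    elites = torch.tensor(np.stack([g for _, g in archive.values()]),
                          dtype=torch.float32)
    vae = VAE()
    opt = torch.optim.Adam(vae.parameters(), lr=1e-3)
    for _ in range(500):
        recon, mu, logvar = vae(elites)
        kl = -0.5 * torch.sum(1 + logvar - mu ** 2 - logvar.exp())
        loss = ((recon - elites) ** 2).sum() + kl
        opt.zero_grad(); loss.backward(); opt.step()

    # (3) mixed variation: half of the offspring move through the learned
    # latent space, half keep the unstructured Gaussian operator
    for _ in range(5000):
        _, parent = archive[np.random.choice(list(archive))]
        if np.random.rand() < 0.5:
            with torch.no_grad():
                child, _, _ = vae(torch.tensor(parent, dtype=torch.float32))
            child = child.numpy() + 0.05 * np.random.randn(DIM)
        else:
            child = parent + 0.1 * np.random.randn(DIM)
        try_insert(child)

The mix in step (3) mirrors the abstract's second insight: offspring pushed through the learned representation stay near the manifold of high-performing solutions, while the plain Gaussian operator keeps exploring beyond what the model has already seen.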




Read More

Peng Wang, Gang Xin, Yuwei Jiao (2021)
In recent decades, with the emergence of numerous novel intelligent optimization algorithms, many optimization researchers have begun to look for a basic search mechanism that offers a more essential explanation of their schemes. This paper studies the basic mechanism of algorithms for black-box optimization through quantum theory. To achieve this goal, the Schrödinger equation is employed to establish the relationship between the optimization problem and the quantum system, which makes it possible to study the dynamic search behaviors of the evolution process with quantum theory. Moreover, to explore the basic behavior of the optimization system, the optimization problem is assumed to be decomposed and approximated. A multilevel-approximation quantum dynamics model of the optimization algorithm is then established, providing a mathematical and physical framework for analyzing optimization algorithms. The basic search behavior derived from this model is governed by quantum theory. Comparative experiments on, and analysis of, different bare-bones algorithms confirm the existence of this quantum-mechanics-based basic search mechanism in black-box optimization.
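For readers unfamiliar with this family of methods, the sketch below shows a standard quantum-behaved, bare-bones sampler in the QPSO style, as an illustrative example of the class the paper analyzes, not the paper's own model (the sphere objective and all hyperparameters are assumptions). New positions are drawn via x = p ± (L/2)·ln(1/u), the sampling rule obtained from the Schrödinger equation for a delta potential well.

    import numpy as np

    def sphere(x):
        return np.sum(x ** 2)

    rng = np.random.default_rng(0)
    N, DIM, ITERS, BETA = 30, 10, 200, 0.75

    pos = rng.uniform(-5, 5, (N, DIM))
    pbest = pos.copy()
    pbest_f = np.array([sphere(x) for x in pos])

    for _ in range(ITERS):
        gbest = pbest[pbest_f.argmin()]
        mean_best = pbest.mean(axis=0)           # center of the potential wells
        for i in range(N):
            phi = rng.random(DIM)
            p = phi * pbest[i] + (1 - phi) * gbest   # per-dimension attractor
            L = 2 * BETA * np.abs(mean_best - pos[i])
            u = 1.0 - rng.random(DIM)            # in (0, 1], keeps log finite
            sign = np.where(rng.random(DIM) < 0.5, 1.0, -1.0)
            pos[i] = p + sign * (L / 2) * np.log(1 / u)
            f = sphere(pos[i])
            if f < pbest_f[i]:
                pbest_f[i], pbest[i] = f, pos[i].copy()

    print("best value found:", pbest_f.min())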
Ilya Loshchilov (2012)
In this paper, we study the performance of IPOP-saACM-ES, a recently proposed self-adaptive surrogate-assisted Covariance Matrix Adaptation Evolution Strategy. The algorithm was tested with restarts until a total budget of $10^6 D$ function evaluations was reached, where $D$ is the dimension of the function search space. The experiments show that the surrogate model control allows IPOP-saACM-ES to be as robust as the original IPOP-aCMA-ES and to outperform it by a factor of 2 to 3 on 6 benchmark problems with moderate noise. On 15 out of 30 benchmark problems in dimension 20, IPOP-saACM-ES exceeds the records observed during BBOB-2009 and BBOB-2010.
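The core surrogate-assistance loop can be summarized as: sample many candidates, rank them with a cheap model fitted to past true evaluations, and spend the expensive objective only on the most promising few. The Python sketch below illustrates this pre-screening idea generically; it is not the saACM-ES implementation, and the quadratic least-squares surrogate, budget, and hyperparameters are all assumptions.

    import numpy as np

    def expensive_f(x):                  # stand-in for a costly black box
        return np.sum(x ** 2) + 0.1 * np.random.randn()

    rng = np.random.default_rng(1)
    DIM, LAM_PRE, LAM, SIGMA = 8, 40, 6, 0.5
    mean = rng.uniform(-3, 3, DIM)
    X_hist, y_hist = [], []

    def features(X):                     # quadratic surrogate features
        return np.hstack([X, X ** 2, np.ones((len(X), 1))])

    for gen in range(60):
        cand = mean + SIGMA * rng.standard_normal((LAM_PRE, DIM))
        if len(X_hist) >= 2 * DIM + 1:   # enough data to fit the surrogate
            A, y = features(np.array(X_hist)), np.array(y_hist)
            w = np.linalg.lstsq(A, y, rcond=None)[0]
            pred = features(cand) @ w
            cand = cand[np.argsort(pred)[:LAM]]   # keep predicted-best only
        else:
            cand = cand[:LAM]
        f = np.array([expensive_f(x) for x in cand])  # true evaluations here
        X_hist += list(cand); y_hist += list(f)
        mean = cand[np.argsort(f)[:3]].mean(axis=0)   # recombine the best 3
        SIGMA *= 0.95

    print("final mean objective:", expensive_f(mean))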
With increasingly many hyperparameters involved in their training, machine learning systems demand a better understanding of hyperparameter tuning automation. This has raised interest in studies of provably efficient black-box optimization, which is made more practical by better exploration mechanisms in algorithm design that manage the flux of both optimization and statistical errors. Prior efforts focus on delineating optimization errors, but this is deficient: black-box optimization algorithms can be inefficient without accounting for heterogeneity among reward samples. In this paper, we make the key delineation of the role of statistical uncertainty in black-box optimization, guiding a more efficient algorithm design. We introduce optimum-statistical collaboration, a framework for managing the interaction between the optimization error flux and the statistical error flux evolving in the optimization process. Inspired by this framework, we propose the VHCT algorithms for objective functions with only local-smoothness assumptions. In theory, we prove that the algorithm enjoys rate-optimal regret bounds; in experiments, we show that it outperforms prior efforts in extensive settings.
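The framework's central trade-off can be made concrete: in a hierarchical partition of the search space, a cell's optimistic score combines its mean reward, a statistical confidence radius (reward noise), and its diameter (resolution error), and a cell should only be refined once sampling has shrunk the statistical term below the geometric one. The sketch below illustrates this interaction in a toy 1-D setting; it is illustrative only, not the VHCT algorithm, and the objective, constants, and splitting rule are assumptions.

    import numpy as np

    def noisy_f(x):                      # 1-D black box observed with noise
        return np.sin(13 * x) * np.sin(27 * x) / 2 + 0.5 + 0.2 * np.random.randn()

    rng = np.random.default_rng(2)
    cells = [{"lo": 0.0, "hi": 1.0, "sum": 0.0, "n": 0}]
    BUDGET, C = 400, 0.4

    for t in range(1, BUDGET + 1):
        def score(c):                    # mean + statistical + geometric error
            if c["n"] == 0:
                return np.inf
            stat = C * np.sqrt(np.log(t + 1) / c["n"])
            return c["sum"] / c["n"] + stat + (c["hi"] - c["lo"])
        c = max(cells, key=score)
        x = rng.uniform(c["lo"], c["hi"])
        c["sum"] += noisy_f(x); c["n"] += 1
        stat = C * np.sqrt(np.log(t + 1) / c["n"])
        if stat < c["hi"] - c["lo"]:     # statistical error no longer dominates:
            mid = (c["lo"] + c["hi"]) / 2  # refine (children restart their stats)
            cells.remove(c)
            cells += [{"lo": c["lo"], "hi": mid, "sum": 0.0, "n": 0},
                      {"lo": mid, "hi": c["hi"], "sum": 0.0, "n": 0}]

    best = max((c for c in cells if c["n"]), key=lambda c: c["sum"] / c["n"])
    print("most promising region:", (best["lo"], best["hi"]))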
We propose a novel information-theoretic approach for Bayesian optimization called Predictive Entropy Search (PES). At each iteration, PES selects the next evaluation point that maximizes the expected information gained with respect to the global maximum. PES codifies this intractable acquisition function in terms of the expected reduction in the differential entropy of the predictive distribution. This reformulation allows PES to obtain approximations that are both more accurate and efficient than other alternatives such as Entropy Search (ES). Furthermore, PES can easily perform a fully Bayesian treatment of the model hyperparameters while ES cannot. We evaluate PES in both synthetic and real-world applications, including optimization problems in machine learning, finance, biotechnology, and robotics. We show that the increased accuracy of PES leads to significant gains in optimization performance.
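Concretely, PES scores a candidate x as the predictive entropy at x minus the expected predictive entropy after conditioning on the location of the global maximum, averaged over posterior samples of that maximizer. The sketch below illustrates this structure with a small numpy Gaussian process; note the conditioning step here is a crude pseudo-observation shortcut standing in for the paper's expectation-propagation approximation, and the kernel, test function, and sample counts are assumptions.

    import numpy as np

    def k(a, b, ell=0.2):                # RBF kernel on a 1-D domain
        return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

    def gp_post(Xt, yt, Xs, noise=1e-4):
        K = k(Xt, Xt) + noise * np.eye(len(Xt))
        Ks = k(Xt, Xs)
        sol = np.linalg.solve(K, Ks)
        mu = sol.T @ yt
        cov = k(Xs, Xs) - Ks.T @ sol
        return mu, np.diag(cov) + noise, cov

    def H(var):                          # differential entropy of a Gaussian
        return 0.5 * np.log(2 * np.pi * np.e * var)

    rng = np.random.default_rng(3)
    f = lambda x: -np.sin(3 * x) - x ** 2 + 0.7 * x
    Xt = rng.uniform(-1, 2, 4)
    yt = f(Xt)
    Xs = np.linspace(-1, 2, 200)

    mu, var, cov = gp_post(Xt, yt, Xs)
    ent_before = H(var)

    # Monte Carlo over global maximizers: Thompson-sample functions from the
    # posterior, take each argmax as x*, condition on it as a pseudo-point.
    M = 20
    ent_after = np.zeros_like(ent_before)
    for _ in range(M):
        sample = rng.multivariate_normal(mu, cov + 1e-8 * np.eye(len(Xs)),
                                         check_valid="ignore")  # jittered cov
        i_star = sample.argmax()
        Xa = np.append(Xt, Xs[i_star])
        ya = np.append(yt, sample[i_star])
        _, var_c, _ = gp_post(Xa, ya, Xs)
        ent_after += H(var_c) / M

    alpha = ent_before - ent_after       # approximate expected information gain
    print("next evaluation point:", Xs[alpha.argmax()])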
This paper introduces a multi-level (m-lev) mechanism into Evolution Strategies (ESs) in order to address a class of global optimization problems that could benefit from fine discretization of their decision variables. Such problems arise in engineering and scientific applications, which possess a multi-resolution control nature, and thus may be formulated either by means of low-resolution variants (providing coarser approximations with presumably lower accuracy for the general problem) or by high-resolution controls. A particular scientific application concerns practical Quantum Control (QC) problems, whose targeted optimal controls may be discretized to increasingly higher resolution, which in turn carries the potential to obtain better control yields. However, state-of-the-art derivative-free optimization heuristics for high-resolution formulations nominally call for an impractically large number of objective function calls. Therefore, an effective algorithmic treatment for such problems is needed. We introduce a framework with an automated scheme to facilitate guided-search over increasingly finer levels of control resolution for the optimization problem, whose on-the-fly learned parameters require careful adaptation. We instantiate the proposed m-lev self-adaptive ES framework by two specific strategies, namely the classical elitist single-child (1+1)-ES and the non-elitist multi-child derandomized $(\mu_W,\lambda)$-sep-CMA-ES. We first show that the approach is suitable by simulation-based optimization of QC systems which were heretofore viewed as too complex to address. We also present a laboratory proof-of-concept for the proposed approach on a basic experimental QC system objective.
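The multi-level idea can be illustrated in a few lines: optimize at a coarse control resolution, upsample the incumbent to the next finer level, and continue. The Python sketch below uses a (1+1)-ES with a rough 1/5th-success step-size rule and linear interpolation between levels; it is an illustrative scheme with a toy objective, not the paper's self-adaptive framework.

    import numpy as np

    def yield_f(ctrl):                    # toy stand-in for a control objective:
        t = np.linspace(0, 1, len(ctrl))  # reward controls close to sin(2*pi*t)
        return -np.sum((ctrl - np.sin(2 * np.pi * t)) ** 2) / len(ctrl)

    rng = np.random.default_rng(4)
    res, sigma = 8, 0.5
    x = rng.standard_normal(res)
    fx = yield_f(x)

    for level in range(4):                # 8 -> 16 -> 32 -> 64 control points
        for _ in range(300):              # elitist (1+1)-ES at this resolution
            y = x + sigma * rng.standard_normal(res)
            fy = yield_f(y)
            if fy >= fx:                  # factors approximate the 1/5th rule
                x, fx, sigma = y, fy, sigma * 1.5
            else:
                sigma *= 0.9
        print(f"resolution {res}: objective {fx:.4f}")
        if level < 3:                     # upsample incumbent to finer level
            res *= 2
            t_old = np.linspace(0, 1, len(x))
            t_new = np.linspace(0, 1, res)
            x = np.interp(t_new, t_old, x)
            fx = yield_f(x)
            sigma = max(sigma, 0.1)       # re-open the step size after refining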
