
Self-Adaptive Surrogate-Assisted Covariance Matrix Adaptation Evolution Strategy

Added by Loshchilov Ilya
Publication date: 2012
Language: English





This paper presents a novel mechanism for adapting surrogate-assisted population-based algorithms. The mechanism is applied to ACM-ES, a recently proposed surrogate-assisted variant of CMA-ES. The resulting algorithm, saACM-ES, adjusts online the lifelength of the current surrogate model (the number of CMA-ES generations before a new surrogate is learned) as well as the surrogate hyper-parameters. Both heuristics significantly improve the quality of the surrogate model, yielding a significant speed-up of saACM-ES over the ACM-ES and CMA-ES baselines. The empirical validation of saACM-ES on the BBOB-2012 noiseless testbed demonstrates the efficiency of the proposed approach and its scalability w.r.t. the problem dimension and the population size; the algorithm reaches new best results on some of the benchmark problems.
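
To make the lifelength mechanism concrete, the sketch below shows, in simplified form, how a surrogate-assisted CMA-ES loop might adapt the number of surrogate-only generations from the surrogate's ranking error. It assumes the pycma package and two hypothetical helpers, train_surrogate and rank_error; online adaptation of the surrogate hyper-parameters is omitted. This is an illustration of the idea, not the authors' reference implementation.

# Minimal sketch of an adaptive surrogate lifelength, assuming pycma and
# hypothetical helpers train_surrogate() and rank_error().
import numpy as np
import cma  # pycma package, assumed available

def sa_surrogate_cmaes(f, x0, sigma0, max_evals, n_max=20):
    es = cma.CMAEvolutionStrategy(x0, sigma0)
    archive_X, archive_y = [], []
    lifelength = 0            # generations to trust the current surrogate
    evals = 0
    surrogate = None
    while evals < max_evals and not es.stop():
        # 1) evaluate one generation with the true objective
        X = es.ask()
        y = [f(x) for x in X]
        evals += len(X)
        es.tell(X, y)
        archive_X.extend(X); archive_y.extend(y)
        # 2) measure how well the previous surrogate ranked these points
        if surrogate is not None:
            err = rank_error(surrogate, X, y)        # hypothetical, in [0, 1]
            # low error -> longer lifelength, high error -> retrain sooner
            lifelength = int(round((1.0 - err) * n_max))
        # 3) learn a fresh surrogate on the archive (hyper-parameter
        #    adaptation, as described in the paper, is omitted here)
        surrogate = train_surrogate(archive_X, archive_y)   # hypothetical
        # 4) run `lifelength` cheap generations on the surrogate only
        for _ in range(lifelength):
            Xs = es.ask()
            es.tell(Xs, [surrogate(x) for x in Xs])
    return es.result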


Read More

Yangjie Mei, Hao Wang (2021)
Over the past decades, many optimization methods have improved substantially with the development of technology, and Evolutionary Algorithms are widely used as heuristic methods. However, their computational budget grows steeply as the problem dimension increases. In this paper, we use the dimensionality reduction method Principal Component Analysis (PCA) to reduce the dimension during the iterations of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), a well-established Evolutionary Algorithm for numerical optimization that is applicable to many kinds of problems. We assess the performance of the new method in terms of convergence rate on multi-modal problems from the Black-Box Optimization Benchmarking (BBOB) problem set, and we use the COmparing Continuous Optimizers (COCO) framework to examine how the new method behaves and to compare it against other algorithms.
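
As a rough illustration of the idea (not the authors' exact procedure), one way to combine PCA with CMA-ES is to fit a principal subspace on promising full-dimensional samples and then let CMA-ES search in the reduced space, decoding each candidate back before evaluation; the sketch below assumes the pycma and scikit-learn packages.

# Rough sketch: CMA-ES searching in a PCA-reduced subspace (illustrative only).
import numpy as np
import cma
from sklearn.decomposition import PCA

def pca_cmaes(f, dim, k, budget, init_samples=200):
    # 1) probe the full space and keep the better (lower-objective) half
    #    to fit the PCA basis
    X = np.random.uniform(-5, 5, size=(init_samples, dim))
    y = np.array([f(x) for x in X])
    good = X[np.argsort(y)[: init_samples // 2]]
    pca = PCA(n_components=k).fit(good)
    # 2) run CMA-ES in the reduced space; decode before each evaluation
    es = cma.CMAEvolutionStrategy(np.zeros(k), 0.5)
    evals = init_samples
    while evals < budget and not es.stop():
        Z = es.ask()                                  # points in R^k
        Xd = pca.inverse_transform(np.array(Z))       # back to R^dim
        ys = [f(x) for x in Xd]
        evals += len(ys)
        es.tell(Z, ys)
    return pca.inverse_transform(es.result.xbest.reshape(1, -1))[0]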
This paper addresses the development of a covariance matrix self-adaptation evolution strategy (CMSA-ES) for solving optimization problems with linear constraints. The proposed algorithm is referred to as Linear Constraint CMSA-ES (lcCMSA-ES). It uses a specially built mutation operator together with repair by projection to satisfy the constraints. The lcCMSA-ES evolves on the linear manifold defined by the constraints. The objective function is only evaluated at feasible search points (interior point method). This is a property often required in application domains such as simulation optimization and finite element methods. The algorithm is tested on a variety of test problems and yields considerable results.
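
The repair-by-projection step can be illustrated in isolation: a candidate that leaves a feasible linear manifold {x : Ax = b} is orthogonally projected back onto it before evaluation. The sketch below covers only this equality-constrained projection and is not the full lcCMSA-ES.

# Minimal sketch of repair by projection onto a linear equality manifold.
import numpy as np

def project_onto_manifold(x, A, b):
    """Orthogonal projection of x onto {z : A z = b} (A assumed full row rank)."""
    residual = A @ x - b
    # Solve (A A^T) lam = residual instead of forming an explicit inverse
    lam = np.linalg.solve(A @ A.T, residual)
    return x - A.T @ lam

# Example: project a random point onto the plane x1 + x2 + x3 = 1
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
x = np.random.randn(3)
x_rep = project_onto_manifold(x, A, b)
print(A @ x_rep)   # ~ [1.0], i.e. the repaired point satisfies the constraint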
Evolution-based neural architecture search requires high computational resources, resulting in long search times. In this work, we propose a framework that applies the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) to the neural architecture search problem, called CMANAS, which achieves better results than previous evolution-based methods while reducing the search time significantly. The architectures are modelled using a normal distribution, which is updated using CMA-ES based on the fitness of the sampled population. We used the accuracy of a trained one-shot model (OSM) on the validation data as a prediction of the fitness of an individual architecture to reduce the search time. We also used an architecture-fitness table (AF table) to keep a record of already evaluated architectures, further reducing the search time. CMANAS finished the architecture search on CIFAR-10 with a top-1 test accuracy of 97.44% in 0.45 GPU days and on CIFAR-100 with a top-1 test accuracy of 83.24% in 0.6 GPU days, on a single GPU. The top architectures from the searches on CIFAR-10 and CIFAR-100 were then transferred to ImageNet, achieving top-5 accuracies of 92.6% and 92.1%, respectively.
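
The architecture-fitness table amounts to memoising the expensive one-shot-model evaluation; a minimal sketch, with a hypothetical evaluate_with_osm callable standing in for the OSM validation-accuracy computation:

# Simplified illustration (not the CMANAS code) of the architecture-fitness table.
from typing import Dict, Tuple

class AFTable:
    def __init__(self, evaluate_with_osm):
        self._evaluate = evaluate_with_osm      # hypothetical OSM evaluator
        self._table: Dict[Tuple[int, ...], float] = {}

    def fitness(self, arch: Tuple[int, ...]) -> float:
        if arch not in self._table:
            self._table[arch] = self._evaluate(arch)   # expensive path
        return self._table[arch]                        # cached path

# Usage inside a CMA-ES loop: decode each sampled real-valued vector into a
# discrete architecture tuple, then query table.fitness(arch) as its score.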
Most existing multiobjective evolutionary algorithms (MOEAs) implicitly assume that each objective function can be evaluated within the same period of time. Typically, this is untenable in many real-world optimization scenarios where evaluating different objectives involves different computer simulations or physical experiments with distinct time complexity. To address this issue, a transfer learning scheme based on surrogate-assisted evolutionary algorithms (SAEAs) is proposed, in which a co-surrogate is adopted to model the functional relationship between the fast and slow objective functions and a transferable instance selection method is introduced to acquire useful knowledge from the search process of the fast objective. Our experimental results on the DTLZ and UF test suites demonstrate that the proposed algorithm is competitive for solving bi-objective optimization problems where the objectives have non-uniform evaluation times.
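
A hedged sketch of the co-surrogate component alone: where both objectives have been measured, a regressor learns how the slow objective deviates from the fast one, so a cheap fast-objective evaluation plus the model approximates the slow objective. The Gaussian-process regressor and the feature construction below are illustrative choices, not necessarily those of the paper.

# Illustrative co-surrogate linking a fast (cheap) and a slow (expensive) objective.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

class CoSurrogate:
    def __init__(self):
        self.gp = GaussianProcessRegressor()

    def fit(self, X_both, f_fast_vals, f_slow_vals):
        # learn the mapping (x, f_fast(x)) -> f_slow(x) - f_fast(x)
        features = np.column_stack([X_both, f_fast_vals])
        self.gp.fit(features, np.asarray(f_slow_vals) - np.asarray(f_fast_vals))

    def predict_slow(self, X, f_fast_vals):
        # approximate the slow objective from cheap evaluations only
        features = np.column_stack([X, f_fast_vals])
        return np.asarray(f_fast_vals) + self.gp.predict(features)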
The covariance matrix contains the complete information about the second-order expectation values of the mode quadratures (position and momentum operators) of the system. Due to its prominence in studies of continuous variable systems, most significantly Gaussian states, special emphasis is put on time evolution models that result in self-contained equations for the covariance matrix. So far, despite not being explicitly implied by this requirement, virtually all such models assume a so-called quadratic, or second-order case, in which the generator of the evolution is at most second-order in the mode quadratures. Here, we provide an explicit model of covariance matrix evolution of infinite order. Furthermore, we derive the solution, including stationary states, for a large subclass of proposed evolutions. Our findings challenge the contemporary understanding of covariance matrix dynamics and may give rise to new methods and improvements in quantum technologies employing continuous variable systems.
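
For reference, in the quadratic (second-order) case mentioned above, the self-contained covariance equation takes the familiar Lyapunov-type form; a sketch in generic notation, with A the drift matrix and D the diffusion matrix (not the paper's higher-order generalisation):

\[ \dot{\sigma}(t) = A\,\sigma(t) + \sigma(t)\,A^{\mathsf{T}} + D \]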
