Compact optimization algorithms are a class of Estimation of Distribution Algorithms (EDAs) characterized by extremely limited memory requirements (hence the name "compact"). Like all EDAs, compact algorithms build and update a probabilistic model of the distribution of solutions within the search space, as opposed to population-based algorithms, which maintain an explicit population of solutions. In addition, to keep their memory consumption low, compact algorithms purposely employ simple probabilistic models that can be described with a small number of parameters. Despite this simplicity, compact algorithms have shown good performance on a broad range of benchmark functions and real-world problems. However, they also come with drawbacks: they tend to converge prematurely and perform poorly on non-separable problems. To overcome these limitations, we investigate an algorithmic scheme that combines compact algorithms with a non-disruptive restart mechanism taken from the literature, named Re-Sampled Inheritance (RI). The resulting compact algorithms with RI are tested on the CEC 2014 benchmark functions. The numerical results show, on the one hand, that RI consistently enhances the performance of compact algorithms while keeping memory usage limited; on the other hand, among the tested algorithms, the best performance is obtained by compact Differential Evolution with RI.
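Although the paper pairs RI with real-valued compact algorithms such as compact Differential Evolution, the general scheme is easiest to illustrate on the classic binary compact GA, whose entire state is a single probability vector. The following Python sketch assumes the standard cGA model update and our own reading of the RI restart (re-initialize the model, then bias it with a randomly chosen contiguous block inherited from the best-so-far solution); the function name, the parameters (virtual_pop, stall_evals), and the 0.9/0.1 bias constants are illustrative assumptions, not the paper's implementation.

```python
import random

def cga_with_ri(fitness, n, virtual_pop=50, max_evals=10000, stall_evals=2000):
    """Sketch of a binary compact GA with an RI-style restart (maximization)."""
    def sample(p):
        return [1 if random.random() < pi else 0 for pi in p]

    p = [0.5] * n                         # compact model: one Bernoulli parameter per bit
    best = sample(p)
    best_f = fitness(best)
    used, since_improve = 1, 0
    while used < max_evals:
        a, b = sample(p), sample(p)       # two competing samples from the model
        fa, fb = fitness(a), fitness(b)
        used += 2
        winner, loser = (a, b) if fa >= fb else (b, a)
        # cGA update: shift each differing probability towards the winner by 1/virtual_pop
        for i in range(n):
            if winner[i] != loser[i]:
                p[i] += (1.0 / virtual_pop) if winner[i] == 1 else (-1.0 / virtual_pop)
                p[i] = min(max(p[i], 1.0 / n), 1.0 - 1.0 / n)
        if max(fa, fb) > best_f:
            best, best_f, since_improve = winner[:], max(fa, fb), 0
        else:
            since_improve += 2
        # RI-style restart (our adaptation to a model-based state): reset the model,
        # then bias a random contiguous block towards the best-so-far solution,
        # so the restart is non-disruptive rather than a blind re-initialization
        if since_improve >= stall_evals:
            p = [0.5] * n
            start, length = random.randrange(n), random.randrange(1, n)
            for k in range(length):
                i = (start + k) % n
                p[i] = 0.9 if best[i] == 1 else 0.1   # bias, don't hard-fix (assumption)
            since_improve = 0
    return best, best_f

# example: maximize OneMax on 30 bits
best, best_f = cga_with_ri(fitness=sum, n=30)
```

Biasing the probability vector, rather than copying bits into a sampled solution, keeps the restart consistent with the compact paradigm: the only persistent state remains the model and the elite solution, so memory usage stays O(n).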