Parameterless Gene-pool Optimal Mixing Evolutionary Algorithms

Added by Arkadiy Dushatskiy
Publication date: 2021
Language: English

When it comes to solving optimization problems with evolutionary algorithms (EAs) in a reliable and scalable manner, detecting and exploiting linkage information, i.e., dependencies between variables, can be key. In this article, we present the latest version of, and propose substantial enhancements to, the Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA): an EA explicitly designed to estimate and exploit linkage information. We begin by performing a large-scale search over several GOMEA design choices to understand what matters most and to obtain a generally best-performing version of the algorithm. Next, we introduce a novel version of GOMEA, called CGOMEA, in which linkage-based variation is further improved by filtering solution mating based on conditional dependencies. We compare our latest version of GOMEA, the newly introduced CGOMEA, and another contending linkage-aware EA, DSMGA-II, in an extensive experimental evaluation involving a benchmark set of 9 black-box problems that can only be solved efficiently if their inherent dependency structure is unveiled and exploited. Finally, in an attempt to make EAs more usable and resilient to parameter choices, we investigate the performance of different automatic population management schemes for GOMEA and CGOMEA, de facto making the EAs parameterless. Our results show that GOMEA and CGOMEA significantly outperform the original GOMEA and DSMGA-II on most problems, setting a new state of the art for the field.
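
At the core of GOMEA is the gene-pool optimal mixing (GOM) variation operator, which recombines a solution one linkage set at a time and keeps a change only if fitness does not deteriorate. The sketch below illustrates this mechanism; the names (gene_pool_optimal_mixing, fos for the family of subsets, evaluate) are illustrative assumptions, not the authors' implementation.

```python
import random

def gene_pool_optimal_mixing(solution, fitness, population, fos, evaluate):
    """One round of gene-pool optimal mixing (GOM) variation (sketch).

    For every linkage set in the FOS (family of subsets), genes are
    copied from a random donor; the change is kept only if the fitness
    does not deteriorate. A simplified sketch, not the full GOMEA.
    """
    offspring = list(solution)
    for linkage_set in fos:
        donor = random.choice(population)
        backup = [offspring[i] for i in linkage_set]
        changed = False
        for i in linkage_set:
            if offspring[i] != donor[i]:
                offspring[i] = donor[i]
                changed = True
        if changed:
            new_fitness = evaluate(offspring)
            if new_fitness >= fitness:   # maximization; accept if not worse
                fitness = new_fitness
            else:                        # revert the copied genes
                for i, v in zip(linkage_set, backup):
                    offspring[i] = v
    return offspring, fitness
```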

Related research

In real-world applications, many optimization problems have the time-linkage property, that is, the objective function value relies on the current solution as well as on historical solutions. Although the rigorous theoretical analysis of evolutionary algorithms has developed rapidly in the last two decades, it remains an open problem to theoretically understand the behavior of evolutionary algorithms on time-linkage problems. This paper takes the first step towards rigorously analyzing evolutionary algorithms for time-linkage functions. Based on the basic OneMax function, we propose a time-linkage function where the first bit value of the last time step is integrated but has a different preference from the current first bit. We prove that with probability $1-o(1)$, randomized local search and the $(1+1)$ EA cannot find the optimum, and with probability $1-o(1)$, the $(\mu+1)$ EA is able to reach the optimum.
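
The abstract describes a OneMax variant in which the first bit of the previous time step enters the objective with the opposite preference. A minimal sketch of such a time-linkage objective follows; the weighting of the historical bit is an assumption for illustration, and the paper's concrete definition may differ.

```python
def time_linkage_onemax(current, previous, weight=None):
    """Time-linkage variant of OneMax (illustrative sketch).

    The value depends on the current bit string and on the first bit of
    the solution from the previous time step, which is rewarded with the
    opposite preference. `weight` controls how strongly the historical
    bit counts; the concrete weighting in the paper may differ.
    """
    n = len(current)
    if weight is None:
        weight = n          # assumption: history term outweighs OneMax
    onemax = sum(current)   # number of ones in the current solution
    return onemax - weight * previous[0]
```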
In this paper we propose a crossover operator for evolutionary algorithms with real-valued genes that is based on the statistical theory of population distributions. The operator is based on the theoretical distribution of the values of the genes of the best individuals in the population. The proposed operator takes into account the localization and dispersion features of the best individuals of the population, with the objective that these features be inherited by the offspring. Our aim is to optimize the balance between exploration and exploitation in the search process. In order to test the efficiency and robustness of this crossover, we have used a set of functions to be optimized with regard to different criteria, such as multimodality, separability, regularity, and epistasis. With this set of functions we can draw conclusions as a function of the problem at hand. We analyze the results using ANOVA and multiple-comparison statistical tests. As an example of how our crossover can be used to solve artificial intelligence problems, we have applied the proposed model to the problem of obtaining the weight of each network in an ensemble of neural networks. The results obtained exceed the performance of standard methods.
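
As a rough illustration of the idea, the sketch below estimates the per-gene localization (mean) and dispersion (standard deviation) of the best individuals and samples an offspring from those statistics. A plain normal distribution is assumed here for simplicity, whereas the paper derives the operator from the theoretical distribution of the best individuals' genes.

```python
import random
import statistics

def distribution_crossover(population, fitnesses, n_best=5):
    """Crossover based on the distribution of the best individuals (sketch).

    Per-gene mean (localization) and standard deviation (dispersion) of
    the `n_best` fittest individuals are estimated, and an offspring is
    sampled around them.
    """
    ranked = sorted(zip(fitnesses, population),
                    key=lambda pair: pair[0], reverse=True)
    best = [ind for _, ind in ranked[:n_best]]
    offspring = []
    for genes in zip(*best):                 # iterate over gene positions
        mu = statistics.mean(genes)
        sigma = statistics.stdev(genes) if len(genes) > 1 else 0.0
        offspring.append(random.gauss(mu, sigma))
    return offspring
```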
In the evolutionary computation research community, the performance of most evolutionary algorithms (EAs) depends strongly on their implemented coordinate system. However, the commonly used coordinate system is fixed and not well suited for different function landscapes, so EAs might not search efficiently. To overcome this shortcoming, in this paper we propose a framework, named ACoS, to adaptively tune the coordinate systems in EAs. In ACoS, an Eigen coordinate system is established by making use of the cumulative population distribution information, which can be obtained based on a covariance matrix adaptation strategy and an additional archiving mechanism. Since the population distribution information can reflect the features of the function landscape to some extent, EAs in the Eigen coordinate system have the capability to identify the modality of the function landscape. In addition, the Eigen coordinate system is coupled with the original coordinate system, and they are selected according to a probability vector. The probability vector aims to determine the selection ratio of each coordinate system for each individual and is adaptively updated based on the information collected from the offspring. ACoS has been applied to two of the most popular EA paradigms, namely particle swarm optimization (PSO) and differential evolution (DE), for solving 30 test functions with 30 and 50 dimensions from the 2014 IEEE Congress on Evolutionary Computation. The experimental studies demonstrate its effectiveness.
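
The following sketch shows the core transformation behind an Eigen coordinate system: rotating the population into the eigenbasis of its covariance matrix before applying a crossover operator, then rotating the result back. The covariance adaptation strategy, the archive, and the probability vector that selects between coordinate systems are omitted; the function names are illustrative, not ACoS's actual API.

```python
import numpy as np

def crossover_in_eigen_coordinates(population, crossover):
    """Apply a crossover operator in the Eigen coordinate system (sketch).

    The population's covariance matrix is eigendecomposed; individuals
    are rotated into the eigenbasis, recombined there, and rotated back.
    """
    X = np.asarray(population)            # shape (pop_size, dim)
    cov = np.cov(X, rowvar=False)         # population covariance matrix
    _, B = np.linalg.eigh(cov)            # columns of B are eigenvectors
    X_eigen = X @ B                       # rotate into the eigenbasis
    offspring_eigen = crossover(X_eigen)  # any real-coded crossover
    return offspring_eigen @ B.T          # rotate back to original axes
```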
Previous theoretical work on multi-objective evolutionary algorithms mostly considers easy problems that are composed of unimodal objectives. This paper takes a first step towards a deeper understanding of how evolutionary algorithms solve multi-modal multi-objective problems. We propose the OneJumpZeroJump problem, a bi-objective problem whose single objectives are isomorphic to the classic jump functions benchmark. We prove that the simple evolutionary multi-objective optimizer (SEMO) cannot compute the full Pareto front. In contrast, for all problem sizes $n$ and all jump sizes $k \in [4..\frac{n}{2} - 1]$, the global SEMO (GSEMO) covers the Pareto front in $\Theta((n-2k)n^{k})$ iterations in expectation. To improve the performance, we combine the GSEMO with two approaches that showed advantages in single-objective multi-modal problems: a heavy-tailed mutation operator and a stagnation detection strategy. Runtime improvements of asymptotic order at least $k^{\Omega(k)}$ are shown for both strategies. Our experiments verify the substantial runtime gains already for moderate problem sizes. Overall, these results show that the ideas recently developed for single-objective evolutionary algorithms can be effectively employed also in multi-objective optimization.
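
The sketch below gives one way to write down the two objectives as jump-like functions of the number of ones and zeros, based on our reading of the abstract: each objective rewards progress up to a fitness valley of width k that only the all-ones (respectively all-zeros) string escapes.

```python
def one_jump_zero_jump(x, k):
    """Bi-objective OneJumpZeroJump value of a bit string (sketch).

    f1 is a jump function of the number of ones, f2 the symmetric jump
    function of the number of zeros; `k` is the jump size.
    """
    n = len(x)
    ones = sum(x)
    zeros = n - ones
    # Slope up to n - k, valley beyond it, peak only at the extreme string.
    f1 = k + ones if ones <= n - k or ones == n else n - ones
    f2 = k + zeros if zeros <= n - k or zeros == n else n - zeros
    return f1, f2
```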
Benchmarking plays an important role in the development of novel search algorithms as well as in the assessment and comparison of contemporary algorithmic ideas. This paper presents common principles that need to be taken into account when considering benchmarking problems for constrained optimization. Current benchmark environments for testing evolutionary algorithms are reviewed in the light of these principles. Along this line, the reader is provided with an overview of the available problem domains in the field of constrained benchmarking. Hence, the review supports algorithm developers with information about the merits and demerits of the available frameworks.
