
Genetic Algorithms for multimodal optimization: a review

Added by Noé Casas
Publication date: 2015
Language: English
Authors: Noé Casas





In this article we provide a comprehensive review of the different evolutionary algorithm techniques used to address multimodal optimization problems, classifying them according to the nature of their approach. On the one hand, there are algorithms that address the issue of early convergence to a local optimum by differentiating the individuals of the population into groups and limiting their interaction, so that each group evolves with a high degree of independence. On the other hand, other approaches directly address the lack of genetic diversity in the population by introducing elements into the evolutionary dynamics that promote the exploration of new niches of the genotypic space. Finally, we study multi-objective genetic algorithms, which handle situations where multiple criteria have to be satisfied without penalizing any of them. A very rich literature has arisen over the years on these topics, and we aim at offering an overview of the most important techniques of each branch of the field.
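
A classic example of the niching and diversity-promoting mechanisms such a review covers is fitness sharing, in which an individual's raw fitness is discounted by the number of similar individuals, so crowded niches become less attractive and the population spreads over several optima. The following Python sketch is illustrative only and is not taken from the article; the objective function, sharing radius, and operators are arbitrary assumptions.

import math
import random

def shared_fitness(population, raw_fitness, sigma_share=0.1, alpha=1.0):
    # Discount each raw fitness by a niche count built from genotypic distances.
    shared = []
    for i, xi in enumerate(population):
        niche_count = 0.0
        for xj in population:
            d = abs(xi - xj)                      # 1-D genotypic distance
            if d < sigma_share:
                niche_count += 1.0 - (d / sigma_share) ** alpha
        shared.append(raw_fitness[i] / niche_count)   # niche_count >= 1 (self)
    return shared

def multimodal(x):
    # Toy objective with several peaks of equal height in [0, 1].
    return math.sin(5 * math.pi * x) ** 2

population = [random.random() for _ in range(50)]
for _ in range(100):
    raw = [multimodal(x) for x in population]
    fit = shared_fitness(population, raw)
    # Binary tournament selection on the shared fitness, then Gaussian mutation.
    parents = [max(random.sample(range(len(population)), 2), key=lambda i: fit[i])
               for _ in range(len(population))]
    population = [min(1.0, max(0.0, population[i] + random.gauss(0, 0.02)))
                  for i in parents]

Because the sharing term penalizes overcrowded regions, the final population tends to cover several peaks of the toy objective instead of collapsing onto a single one.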




Related research

Benchmarking plays an important role in the development of novel search algorithms as well as in the assessment and comparison of contemporary algorithmic ideas. This paper presents common principles that need to be taken into account when considering benchmarking problems for constrained optimization. Current benchmark environments for testing Evolutionary Algorithms are reviewed in the light of these principles. Along these lines, the reader is provided with an overview of the available problem domains in the field of constrained benchmarking. The review thereby supports algorithm developers with information about the merits and demerits of the available frameworks.
We present a novel Auxiliary Truth enhanced Genetic Algorithm (GA) that uses logical or mathematical constraints as a means of data augmentation as well as to compute loss (in conjunction with the traditional MSE), with the aim of increasing both the data efficiency and the accuracy of symbolic regression (SR) algorithms. Our method, the logic-guided genetic algorithm (LGGA), takes as input a set of labelled data points and auxiliary truths (ATs) (mathematical facts known a priori about the unknown function the regressor aims to learn) and outputs a specially generated and curated dataset that can be used with any SR method. Three key insights underpin our method: first, SR users often know simple ATs about the function they are trying to learn. Second, whenever an SR system produces a candidate equation inconsistent with these ATs, we can compute a counterexample to prove the inconsistency, and this counterexample may be used to augment the dataset and be fed back to the SR system in a corrective feedback loop. Third, using these ATs in both the loss function and the data augmentation process leads to better rates of convergence, accuracy, and data efficiency. We evaluate LGGA against state-of-the-art SR tools, namely Eureqa and TuringBot, on 16 physics equations from The Feynman Lectures on Physics. We find that using these SR tools in conjunction with LGGA enables them to solve up to 30.0% more equations while needing only a fraction of the data required by the same tool without LGGA, i.e., up to a 61.9% improvement in data efficiency.
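
The corrective feedback loop described above can be illustrated with a small Python sketch. This is not the authors' implementation; the auxiliary truth (symmetry of the target function), the consistency check, and the augmentation step are assumptions chosen for illustration, and sr_fit stands for any symbolic regression routine.

import random

def symmetry_violation(candidate, trials=200):
    # Return a point where the candidate breaks f(x, y) == f(y, x), if one is found.
    for _ in range(trials):
        x, y = random.uniform(-5, 5), random.uniform(-5, 5)
        if abs(candidate(x, y) - candidate(y, x)) > 1e-6:
            return (x, y)
    return None

def augment_with_symmetry(dataset):
    # Under the symmetry truth, every labelled point ((x, y), z) also labels ((y, x), z).
    mirrored = [((y, x), z) for ((x, y), z) in dataset]
    return dataset + [p for p in mirrored if p not in dataset]

def lgga_style_loop(sr_fit, dataset, max_rounds=10):
    # sr_fit: any SR routine that maps a labelled dataset to a candidate callable.
    candidate = sr_fit(dataset)
    for _ in range(max_rounds):
        if symmetry_violation(candidate) is None:
            break                                 # candidate respects the auxiliary truth
        dataset = augment_with_symmetry(dataset)  # feed augmented data back to the SR system
        candidate = sr_fit(dataset)
    return candidate, dataset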
Jian Yang, Yuhui Shi (2021)
Population-based methods are often used to solve multimodal optimization problems. By incorporating a niching or clustering strategy, state-of-the-art approaches generally divide the population into several subpopulations in order to find multiple solutions for the problem at hand. However, these methods are guided only by the fitness value during the iterations and struggle to determine the number of subpopulations, i.e., the number of niche areas or clusters. To compensate for this drawback, this paper presents an Attention-oriented Brain Storm Optimization (ABSO) method that introduces the attention mechanism into a relatively new swarm intelligence algorithm, Brain Storm Optimization (BSO). By converting the objective space from fitness space into attention space, the individuals are clustered and updated iteratively according to their salient values. Rather than converging to a single global optimum, the proposed method can guide the search procedure to converge to multiple salient solutions. Preliminary results show that the proposed method can locate multiple global and local optimal solutions of several multimodal benchmark functions. The proposed method needs less prior knowledge of the problem and can automatically converge to multiple optima guided by the attention mechanism, which shows excellent potential for further development.
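
The abstract does not detail the attention mechanism, but the clustering-then-perturbation loop that BSO-style methods build on can be sketched roughly as follows in Python. The salience score, cluster count, and step size are illustrative assumptions and do not reproduce the paper's method.

import random

def bso_style_step(population, objective, salience, n_clusters=5, step=0.05):
    # 1. Rank individuals by their salience value and split them into clusters.
    ranked = sorted(population, key=salience, reverse=True)
    size = max(1, len(ranked) // n_clusters)
    clusters = [ranked[i:i + size] for i in range(0, len(ranked), size)]
    # 2. Perturb each member of each cluster and keep the better of the pair,
    #    so each cluster can track its own optimum instead of collapsing onto
    #    a single global one.
    new_population = []
    for cluster in clusters:
        for x in cluster:
            candidate = x + random.gauss(0, step)
            new_population.append(max(x, candidate, key=objective))
    return new_population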
Nuno Alves (2010)
Since their conception in 1975, Genetic Algorithms have been an extremely popular approach to finding exact or approximate solutions to optimization and search problems. Over the last years there has been renewed interest in the field, with related techniques, such as grammatical evolution, being developed. Unfortunately, work on developing genetic optimizations for low-end embedded architectures has not embraced the same enthusiasm. This short paper tackles that situation by demonstrating how genetic algorithms can be implemented on the Arduino Duemilanove, a 16 MHz open-source microcontroller with limited computation power and storage resources. As part of this short paper, the libraries used in this implementation are released into the public domain under a GPL license.
This work proposes a novel approach to evaluating and analyzing the behavior of multi-population parallel genetic algorithms (PGAs) when running on a cluster of multi-core processors. In particular, we study their numerical and computational behavior in depth by proposing a mathematical model that represents the observed performance curves. Based on this model, we discuss the emerging mathematical descriptions of PGA performance instead of, e.g., individual isolated results subject to visual inspection, for a better understanding of the effects of the number of cores used (scalability), the migration policy (the migration gap, in this paper), and the features of the solved problem (type of encoding and problem size). The conclusions based on the real figures and the numerical models fitted to them represent a fresh way of understanding speed-up, running time, and numerical effort, allowing a comparison based on a few meaningful numeric parameters. This set of conclusions goes beyond the usual textual lessons found in past works on PGAs; it can be used as a tool for estimating the future performance of the algorithms and for finding out their limitations.
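
The abstract does not give the paper's performance model, so as an illustration the following Python sketch fits a classical Amdahl-style speed-up curve S(p) = 1 / (s + (1 - s) / p) to synthetic measurements; the data are generated inside the script and are not results from the paper.

import numpy as np
from scipy.optimize import curve_fit

def amdahl_speedup(p, serial_fraction):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)

cores = np.array([1, 2, 4, 8, 16, 32], dtype=float)
rng = np.random.default_rng(0)
assumed_serial_fraction = 0.05                    # assumption for the demo only
measured = amdahl_speedup(cores, assumed_serial_fraction) + rng.normal(0, 0.2, cores.size)

params, _ = curve_fit(amdahl_speedup, cores, measured, p0=[0.1])
print(f"estimated serial fraction: {params[0]:.3f}")
# A single fitted parameter summarizes the whole scalability curve, which is the
# kind of compact numerical description of PGA behavior the paper argues for.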