
A Many-Objective Evolutionary Algorithm With Two Interacting Processes: Cascade Clustering and Reference Point Incremental Learning

Added by Mingde Zhao
Publication date: 2018
Research language: English





Research has shown that it is difficult to obtain proximity to the true Pareto front while maintaining diversity in many-objective optimization problems. The complexity of the true Pareto front also challenges reference vector-based algorithms, which cannot adapt well to its diverse characteristics without a priori knowledge. This paper proposes a many-objective optimization algorithm with two interacting processes: cascade clustering and reference point incremental learning (CLIA). In the population selection process based on cascade clustering (CC), the nondominated and dominated individuals are clustered and sorted in different manners in a cascade style, using the reference vectors provided by the incremental learning process, and are then selected by round-robin for better proximity and diversity. In the reference vector adaptation process based on reference point incremental learning, a proper distribution of reference points is gradually learned from the feedback provided by the CC-based process. Experimental studies on several benchmark problems show that CLIA is competitive with state-of-the-art algorithms and achieves impressive efficiency and versatility, relying only on the interaction between the two processes without incurring extra evaluations.
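The following is a minimal sketch of the kind of reference-vector clustering and round-robin selection the abstract describes. The function names (`associate`, `round_robin_select`) and the proximity score used inside each cluster are illustrative assumptions, not taken from the CLIA implementation.

```python
# Minimal sketch of reference-vector clustering with round-robin selection
# (illustrative only; not the CLIA implementation).
import numpy as np

def associate(objs, ref_vectors):
    """Assign each individual to the reference vector with the smallest angle."""
    norm_objs = objs / (np.linalg.norm(objs, axis=1, keepdims=True) + 1e-12)
    norm_refs = ref_vectors / np.linalg.norm(ref_vectors, axis=1, keepdims=True)
    cosines = norm_objs @ norm_refs.T           # cosine of the angle to each vector
    return np.argmax(cosines, axis=1)           # index of the closest reference vector

def round_robin_select(objs, ref_vectors, n_survivors):
    """Pick survivors one cluster at a time so every reference direction is served."""
    assert n_survivors <= len(objs)
    labels = associate(objs, ref_vectors)
    clusters = [np.where(labels == k)[0].tolist() for k in range(len(ref_vectors))]
    # Stand-in proximity score: distance to the ideal point (assumed at the origin).
    for c in clusters:
        c.sort(key=lambda i: np.linalg.norm(objs[i]))
    survivors = []
    while len(survivors) < n_survivors:
        for c in clusters:                      # round-robin over non-empty clusters
            if c and len(survivors) < n_survivors:
                survivors.append(c.pop(0))
    return survivors
```

For example, with an objective matrix `objs` of shape (N, M) and a set of reference vectors, `round_robin_select(objs, ref_vectors, N // 2)` returns the indices of the retained individuals.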



Related research

The infeasible parts of the objective space in difficult many-objective optimization problems cause trouble for evolutionary algorithms. This paper proposes a reference vector-based algorithm that uses two interacting engines to adapt the reference vectors and to evolve the population towards the true Pareto front (PF), so that the reference vectors are always evenly distributed within the current PF and provide appropriate guidance for selection. The current PF is tracked by maintaining an archive of nondominated individuals, and the reference vectors are adapted with the help of a second archive that contains layers of reference vectors of different densities. Experimental results show the expected characteristics and competitive performance of the proposed algorithm, TEEA.
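As a rough illustration of the adaptation idea only (not the actual TEEA update, which works with layered vector archives of different densities), the sketch below prunes reference vectors that no longer attract any member of the nondominated archive; all names are assumptions.

```python
# Hedged sketch: keep only reference vectors that attract at least one member
# of the nondominated archive (a stand-in for keeping vectors inside the
# current PF; the actual TEEA update is richer than this).
import numpy as np

def active_reference_vectors(archive_objs, ref_vectors):
    """Return the reference vectors associated with at least one archived point."""
    norm_a = archive_objs / (np.linalg.norm(archive_objs, axis=1, keepdims=True) + 1e-12)
    norm_r = ref_vectors / np.linalg.norm(ref_vectors, axis=1, keepdims=True)
    nearest = np.argmax(norm_a @ norm_r.T, axis=1)   # closest vector per archive point
    return ref_vectors[np.unique(nearest)]
```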
Many-objective evolutionary algorithms (MOEAs), especially decomposition-based MOEAs, have attracted wide attention in recent years. Recent studies show that a well-designed combination of the decomposition method and the domination method can improve the performance, i.e., the convergence and diversity, of an MOEA. In this paper, a novel way of combining the decomposition method and the domination method is proposed. More precisely, a set of weight vectors is employed to decompose a given many-objective optimization problem (MaOP), and a hybrid of the penalty-based boundary intersection function and dominance is proposed to compare local solutions within the subpopulation defined by a weight vector. An MOEA based on this hybrid method is implemented and tested on problems chosen from two well-known test suites, DTLZ and WFG. The experimental results show that the algorithm is very competitive in dealing with MaOPs. The algorithm is subsequently extended to solve constrained MaOPs, and the constrained version also shows good performance in terms of convergence and diversity. These results reveal that using dominance locally and combining it with the decomposition method can effectively improve the performance of an MOEA.
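The penalty-based boundary intersection (PBI) function mentioned above is a standard decomposition scalarization. The sketch below gives its usual definition and one plausible "dominance first, PBI as tie-breaker" comparison; the hybrid rule shown is an assumption for illustration and may differ from the paper's exact rule.

```python
# Standard PBI scalarization plus an assumed "dominance first, PBI second"
# comparison inside the subpopulation of a weight vector w.
import numpy as np

def pbi(f, w, z_star, theta=5.0):
    """PBI(f) = d1 + theta * d2 for objective vector f, weight w, ideal point z*."""
    w = w / np.linalg.norm(w)
    d1 = np.dot(f - z_star, w)                  # distance along the weight direction
    d2 = np.linalg.norm(f - z_star - d1 * w)    # perpendicular distance to that direction
    return d1 + theta * d2

def dominates(a, b):
    """Pareto dominance for minimization."""
    return np.all(a <= b) and np.any(a < b)

def better(f_a, f_b, w, z_star):
    """Prefer f_a within the subpopulation of w (assumed hybrid rule)."""
    if dominates(f_a, f_b):
        return True
    if dominates(f_b, f_a):
        return False
    return pbi(f_a, w, z_star) < pbi(f_b, w, z_star)   # fall back to decomposition
```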
264 - Jinjin Xu, Yaochu Jin, Wenli Du (2021)
Data-driven optimization has found many successful applications in the real world and has received increased attention in the field of evolutionary optimization. Most existing algorithms assume that the data used for optimization are always available on a central server for the construction of surrogates. This assumption, however, may fail to hold when the data must be collected in a distributed way and are subject to privacy restrictions. This paper proposes a federated data-driven evolutionary multi-/many-objective optimization algorithm. To this end, federated learning is leveraged for surrogate construction, so that multiple clients collaboratively train a radial basis function network as the global surrogate. A new federated acquisition function is then proposed for the central server, which approximates the objective values using the global surrogate and estimates the uncertainty of the approximated values based on the local models. The performance of the proposed algorithm is verified on a series of multi-/many-objective benchmark problems by comparing it with two state-of-the-art surrogate-assisted multi-objective evolutionary algorithms.
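A heavily hedged sketch of the general idea follows: clients fit the output weights of an RBF surrogate on shared centres, the average prediction over clients acts as the global surrogate, and the spread across local models stands in for uncertainty in an LCB-style score. The shared-centre assumption, all names, and the acquisition rule are illustrative; the paper's actual federated acquisition function is not reproduced here.

```python
# Hedged sketch of federated surrogate construction and an LCB-style score.
# Shared RBF centres, simple averaging, and the acquisition rule are assumptions.
import numpy as np

def rbf_features(X, centres, gamma=1.0):
    """Gaussian RBF feature matrix: phi[i, j] = exp(-gamma * ||x_i - c_j||^2)."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def fit_local(X, y, centres, ridge=1e-6):
    """Each client solves a ridge-regularised least squares on its own data."""
    phi = rbf_features(X, centres)
    A = phi.T @ phi + ridge * np.eye(phi.shape[1])
    return np.linalg.solve(A, phi.T @ y)        # local output weights

def federated_acquisition(x, centres, local_weights, kappa=1.0):
    """Global prediction (mean over clients) minus an uncertainty proxy (their spread)."""
    phi = rbf_features(x[None, :], centres)[0]
    preds = np.array([phi @ w for w in local_weights])   # per-client predictions
    return preds.mean() - kappa * preds.std()            # minimization: lower is better

# Typical use: local_weights = [fit_local(Xc, yc, centres) for Xc, yc in client_data]
```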
155 - Ke Li, Renzhi Chen, Guangtao Fu (2017)
When solving constrained multi-objective optimization problems, an important issue is how to balance convergence, diversity and feasibility simultaneously. To address this issue, this paper proposes a parameter-free constraint handling technique, a two-archive evolutionary algorithm, for constrained multi-objective optimization. It maintains two co-evolving populations simultaneously: one, denoted the convergence archive, is the driving force that pushes the population toward the Pareto front; the other, denoted the diversity archive, mainly aims to maintain population diversity. In particular, to complement the behavior of the convergence archive and provide as much diversified information as possible, the diversity archive explores areas under-exploited by the convergence archive, including the infeasible regions. To leverage the complementary effects of both archives, a restricted mating selection mechanism is developed that adaptively chooses appropriate mating parents from the two archives according to their evolution status. Comprehensive experiments on a series of benchmark problems and a real-world case study demonstrate the competitiveness of the proposed algorithm compared to five state-of-the-art constrained evolutionary multi-objective optimizers.
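One plausible reading of "adaptively chooses appropriate mating parents according to their evolution status" is sketched below: the archive contributing the larger fraction of nondominated solutions to the union supplies the first parent. This is an assumption for illustration, not the paper's exact rule.

```python
# Assumed restricted mating rule: the archive contributing the larger fraction
# of nondominated solutions to the union supplies the first parent.
import random
import numpy as np

def dominates(a, b):
    """Pareto dominance for minimization."""
    return np.all(a <= b) and np.any(a < b)

def nondominated_fraction(objs, pool):
    """Fraction of `objs` not dominated by any vector in `pool`."""
    count = sum(1 for f in objs if not any(dominates(g, f) for g in pool))
    return count / len(objs)

def pick_mating_parents(ca_objs, da_objs):
    """Choose one parent from the stronger archive and one from either archive."""
    union = list(ca_objs) + list(da_objs)
    rho_ca = nondominated_fraction(ca_objs, union)
    rho_da = nondominated_fraction(da_objs, union)
    first_pool = ca_objs if rho_ca >= rho_da else da_objs   # favour the stronger archive
    second_pool = random.choice([ca_objs, da_objs])         # keep some diversity pressure
    return random.choice(list(first_pool)), random.choice(list(second_pool))
```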
An important challenge in reinforcement learning, including evolutionary robotics, is to solve multimodal problems, where agents have to act in qualitatively different ways depending on the circumstances. Because multimodal problems are often too difficult to solve directly, it is helpful to take advantage of staging, where a difficult task is divided into simpler subtasks that can serve as stepping stones for solving the overall problem. Unfortunately, choosing an effective ordering for these subtasks is difficult, and a poor ordering can reduce the speed and performance of the learning process. Here, we provide a thorough introduction and investigation of the Combinatorial Multi-Objective Evolutionary Algorithm (CMOEA), which avoids ordering subtasks by allowing all combinations of subtasks to be explored simultaneously. We compare CMOEA against two algorithms that can similarly optimize on multiple subtasks simultaneously: NSGA-II and Lexicase Selection. The algorithms are tested on a multimodal robotics problem with six subtasks as well as a maze navigation problem with a hundred subtasks. On these problems, CMOEA either outperforms or is competitive with the controls. Separately, we show that adding a linear combination over all objectives can improve the ability of NSGA-II to solve these multimodal problems. Lastly, we show that, in contrast to NSGA-II and Lexicase Selection, CMOEA can effectively leverage secondary objectives to achieve state-of-the-art results on the robotics task. In general, our experiments suggest that CMOEA is a promising, state-of-the-art algorithm for solving multimodal problems.
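A minimal sketch of the CMOEA bin structure described above: one bin per non-empty combination of subtasks, with a multiplicative score over the subtasks in that bin. Enumerating every combination is only practical for small subtask counts; the paper's bin capacities and its handling of many subtasks are not reproduced here, and the multiplicative aggregation is an illustrative choice.

```python
# Sketch of CMOEA-style bins: one bin per non-empty combination of subtasks,
# with a multiplicative score over the subtasks in that combination.
from itertools import combinations

def all_subtask_bins(n_subtasks):
    """Every non-empty subset of subtask indices gets its own bin."""
    bins = []
    for k in range(1, n_subtasks + 1):
        bins.extend(combinations(range(n_subtasks), k))
    return bins

def bin_score(subtask_perfs, combo):
    """Aggregate an individual's performance over the subtasks in this bin."""
    score = 1.0
    for i in combo:
        score *= subtask_perfs[i]
    return score

# Example: 3 subtasks -> 7 bins for an individual scoring (0.9, 0.5, 0.2).
perfs = [0.9, 0.5, 0.2]
for combo in all_subtask_bins(3):
    print(combo, round(bin_score(perfs, combo), 3))
```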
