
A Coevolutionary Variable Neighborhood Search Algorithm for Discrete Multitasking (CoVNS): Application to Community Detection over Graphs

Added by Eneko Osaba
Publication date: 2020
Research language: English





The main goal of the multitasking optimization paradigm is to solve multiple concurrent optimization tasks simultaneously through a single search process. To attain promising results, potential complementarities and synergies between tasks are exploited, so that the tasks help each other through the exchange of genetic material. This paper focuses on Evolutionary Multitasking, a perspective for dealing with multitasking optimization scenarios that embraces concepts from Evolutionary Computation. This work contributes to the field by presenting a new multitasking approach named the Coevolutionary Variable Neighborhood Search Algorithm (CoVNS), which draws its inspiration from both the Variable Neighborhood Search metaheuristic and coevolutionary strategies. The second contribution of the paper is the application field: the optimal partitioning of graph instances whose connections among nodes are directed and weighted. This paper pioneers the simultaneous solving of this kind of task. Two different multitasking scenarios are considered, each comprising 11 graph instances. Results obtained by our method are compared to those of a parallel Variable Neighborhood Search and of independent executions of the basic Variable Neighborhood Search. The discussion of these results supports our hypothesis that the proposed method is a promising scheme for simultaneously solving community detection problems over graphs.
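To make the idea behind CoVNS more tangible, the sketch below shows how a basic Variable Neighborhood Search loop can be run over several tasks at once while periodically exchanging incumbent solutions between them. This is only a minimal illustration under assumed simplifications: the bit-string encoding, the shake and local-search operators, the toy objectives, and the migration policy are hypothetical stand-ins, not the authors' CoVNS implementation or their community-detection encoding.

# Minimal, illustrative sketch of a coevolutionary VNS loop over two tasks.
# NOT the authors' CoVNS: the encoding, operators, toy objectives, and
# migration policy are assumptions made only to show the control flow.
import random

def shake(solution, k):
    # k-th neighborhood: flip k random positions
    s = solution[:]
    for _ in range(k):
        i = random.randrange(len(s))
        s[i] = 1 - s[i]
    return s

def local_search(solution, objective):
    # first-improvement bit-flip local search (minimization)
    best, best_val = solution[:], objective(solution)
    improved = True
    while improved:
        improved = False
        for i in range(len(best)):
            cand = best[:]
            cand[i] = 1 - cand[i]
            val = objective(cand)
            if val < best_val:
                best, best_val, improved = cand, val, True
                break
    return best, best_val

def covns(tasks, n_bits=20, k_max=3, iters=100, migrate_every=20):
    # one incumbent per task; all tasks advance within a single search loop
    incumbents = []
    for objective in tasks:
        s = [random.randint(0, 1) for _ in range(n_bits)]
        incumbents.append((s, objective(s)))
    for it in range(1, iters + 1):
        for t, objective in enumerate(tasks):
            sol, val = incumbents[t]
            k = 1
            while k <= k_max:
                cand, cand_val = local_search(shake(sol, k), objective)
                if cand_val < val:
                    sol, val, k = cand, cand_val, 1   # improvement: restart neighborhoods
                else:
                    k += 1                            # no improvement: widen neighborhood
            incumbents[t] = (sol, val)
        if it % migrate_every == 0:
            # coevolutionary step: offer each task the other task's incumbent,
            # keeping it only if it improves the receiving task's objective
            for t, objective in enumerate(tasks):
                donor = incumbents[(t + 1) % len(tasks)][0]
                if objective(donor) < incumbents[t][1]:
                    incumbents[t] = (donor[:], objective(donor))
    return incumbents

if __name__ == "__main__":
    # two toy objectives standing in for two community-detection instances
    tasks = [lambda s: sum(s), lambda s: sum(1 - b for b in s)]
    for sol, val in covns(tasks):
        print(val)

In the actual community detection setting, the solutions would encode graph partitions and the objectives would be partition-quality measures over the directed, weighted graphs; this sketch only mirrors the loop structure and the inter-task exchange step.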



Related research

The Flexible Job Shop Scheduling Problem (FJSP) is a combinatorial problem that continues to be studied extensively due to its practical implications in manufacturing systems, and due to emerging new variants that model and optimize more complex situations reflecting the current needs of industry. This work presents a new metaheuristic algorithm called GLNSA (Global-local neighborhood search algorithm), in which the neighborhood concepts of a cellular automaton are used so that a set of leading solutions, called smart_cells, generates and shares information that helps to optimize instances of FJSP. The GLNSA algorithm is complemented with a tabu search that implements a simplified version of the Nopt1 neighborhood defined in [1] to complete the optimization task. The experiments carried out show satisfactory performance of the proposed algorithm compared with other recent, widely cited algorithms from the specialized literature, using 86 test problems and improving the best result reported in previous works for two of them.
This paper presents a novel neural network design that learns the heuristic for Large Neighborhood Search (LNS). LNS consists of a destroy operator and a repair operator that specify how the neighborhood search is carried out when solving combinatorial optimization problems. The approach proposed in this paper applies a Hierarchical Recurrent Graph Convolutional Network (HRGCN) as an LNS heuristic, namely Dynamic Partial Removal, with the advantages of adaptive destruction, the potential to search across a large scale, and context-awareness from both spatial and temporal perspectives. This model generalizes as an efficient heuristic for different combinatorial optimization problems, especially problems with relatively tight constraints. In this paper we apply it to the vehicle routing problem (VRP) as an example. The experimental results show that this approach outperforms traditional LNS heuristics on the same problem. The source code is available at https://github.com/water-mirror/DPR.
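As context for the destroy/repair template mentioned above, here is a generic, hand-crafted LNS skeleton on a toy routing instance. The paper's learned HRGCN destroy policy (Dynamic Partial Removal) is not reproduced here; the random destroy, greedy repair, and toy objective below are assumptions used only to show where such a learned policy would plug in.

# Generic destroy/repair LNS skeleton on a toy routing instance (illustration only;
# the learned destroy policy from the paper is not reproduced here).
import random

points = [(random.random(), random.random()) for _ in range(30)]

def tour_length(order):
    # Manhattan length of the open tour visiting points in the given order
    return sum(abs(points[a][0] - points[b][0]) + abs(points[a][1] - points[b][1])
               for a, b in zip(order, order[1:]))

def destroy(order, k=5):
    # remove k random customers from the tour
    removed = random.sample(order, k)
    kept = [c for c in order if c not in removed]
    return kept, removed

def repair(kept, removed):
    # greedily re-insert each removed customer at its cheapest position
    for c in removed:
        best_pos = min(range(len(kept) + 1),
                       key=lambda i: tour_length(kept[:i] + [c] + kept[i:]))
        kept = kept[:best_pos] + [c] + kept[best_pos:]
    return kept

def lns(initial, iters=300):
    best = list(initial)
    best_val = tour_length(best)
    for _ in range(iters):
        candidate = repair(*destroy(best))
        val = tour_length(candidate)
        if val < best_val:            # accept only improving solutions
            best, best_val = candidate, val
    return best, best_val

if __name__ == "__main__":
    tour, length = lns(range(len(points)))
    print(round(length, 3))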
Transfer Optimization is an incipient research area dedicated to solving multiple optimization tasks simultaneously. Among the different approaches that can address this problem effectively, Evolutionary Multitasking resorts to concepts from Evolutionary Computation to solve multiple problems within a single search process. In this paper we introduce a novel adaptive metaheuristic algorithm for Evolutionary Multitasking environments, coined the Adaptive Transfer-guided Multifactorial Cellular Genetic Algorithm (AT-MFCGA). AT-MFCGA relies on cellular automata to implement mechanisms for exchanging knowledge among the optimization problems under consideration. Furthermore, our approach is able to explain by itself the synergies among tasks that were encountered and exploited during the search, which helps us to understand interactions between related optimization tasks. A comprehensive experimental setup is designed to assess and compare the performance of AT-MFCGA with that of other renowned evolutionary multitasking alternatives (MFEA and MFEA-II). The experiments comprise 11 multitasking scenarios composed of 20 instances of 4 combinatorial optimization problems, yielding the largest discrete multitasking environment solved to date. Results are conclusive regarding the superior quality of the solutions provided by AT-MFCGA with respect to the rest of the methods, and they are complemented by a quantitative examination of the genetic transferability among tasks throughout the search process.
Shengcai Liu, Ke Tang, Xin Yao (2017)
This paper studies improving solvers based on their past solving experiences, focusing on improvement via offline training. Specifically, the key issues of offline training methods are discussed, and research belonging to this category but coming from different areas is reviewed in a unified framework. Existing training methods generally adopt a two-stage strategy in which selecting the training instances and training the solver are treated as two independent phases. This paper proposes a new training method, dubbed LiangYi, which addresses these two issues simultaneously. LiangYi includes a training module for a population-based solver and an instance sampling module for updating the training instances. The idea behind LiangYi is to promote the population-based solver by training it (with the training module) to improve its performance on those instances (discovered by the sampling module) on which it performs badly, while keeping the good performance it obtained on previous instances. An instantiation of LiangYi for the Travelling Salesman Problem is also proposed. Empirical results on a large testing set containing 10000 instances showed that LiangYi could train solvers that perform significantly better than solvers trained by other state-of-the-art training methods. Moreover, an empirical investigation of the behaviour of LiangYi confirmed that it was able to continuously improve the solver through training.
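The two-module interaction described above can be pictured with the following toy sketch of an alternating "train, then sample hard instances" loop. It is not the LiangYi implementation: the single-parameter solver, the numeric instances, and the hardness criterion are hypothetical stand-ins that only make the control flow concrete.

# Toy sketch of alternating training and instance sampling (illustration only;
# the solver, instances, and hardness rule are hypothetical stand-ins).
import random

def training_module(solver, pool):
    # toy training: pull the solver's parameter toward the pool average
    target = sum(pool) / len(pool)
    return target if solver is None else 0.5 * (solver + target)

def sampling_module(solver, n_new=3):
    # toy sampling: generate candidates and keep those farthest from what
    # the current solver handles well
    candidates = [random.uniform(0, 100) for _ in range(20)]
    return sorted(candidates, key=lambda x: abs(x - solver), reverse=True)[:n_new]

def alternating_training(rounds=5):
    pool = [random.uniform(0, 100) for _ in range(5)]
    solver = None
    for _ in range(rounds):
        solver = training_module(solver, pool)   # improve on current instances
        pool.extend(sampling_module(solver))     # add instances it still struggles on,
                                                 # while keeping the earlier ones
    return solver, pool

if __name__ == "__main__":
    solver, pool = alternating_training()
    print(round(solver, 2), len(pool))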
Large Neighborhood Search (LNS) is a combinatorial optimization heuristic that starts with an assignment of values for the variables to be optimized, and iteratively improves it by searching a large neighborhood around the current assignment. In this paper we consider a learning-based LNS approach for mixed integer programs (MIPs). We train a Neural Diving model to represent a probability distribution over assignments, which, together with an off-the-shelf MIP solver, generates an initial assignment. Formulating the subsequent search steps as a Markov Decision Process, we train a Neural Neighborhood Selection policy to select a search neighborhood at each step, which is searched using a MIP solver to find the next assignment. The policy network is trained using imitation learning. We propose a target policy for imitation that, given enough compute resources, is guaranteed to select the neighborhood containing the optimal next assignment amongst all possible choices for the neighborhood of a specified size. Our approach matches or outperforms all the baselines on five real-world MIP datasets with large-scale instances from diverse applications, including two production applications at Google. It achieves 2× to 37.8× better average primal gap than the best baseline on three of the datasets at large running times.
