
Investigating a Hybrid Metaheuristic For Job Shop Rescheduling

Added by Uwe Aickelin
Publication date: 2008
Research language: English





Previous research has shown that artificial immune systems can be used to produce robust schedules in a manufacturing environment. The main goal is to develop building blocks (antibodies) of partial schedules that can be used to construct backup solutions (antigens) when disturbances occur during production. The building blocks are created based upon underpinning ideas from artificial immune systems and evolved using a genetic algorithm (Phase I). Each partial schedule (antibody) is assigned a fitness value and the best partial schedules are selected to be converted into complete schedules (antigens). We further investigate whether simulated annealing and the great deluge algorithm can improve the results when hybridised with our artificial immune system (Phase II). We use ten fixed solutions as our target and measure how well we cover these specific scenarios.
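To make the Phase II improvement step concrete, the sketch below shows a minimal great deluge acceptance loop applied to a complete schedule; the `neighbor` move operator and `cost` evaluation are hypothetical placeholders rather than the paper's actual implementation, and simulated annealing would differ only in accepting worse moves with a temperature-dependent probability.

```python
def great_deluge(initial_schedule, neighbor, cost, max_iters=10000, decay=0.999):
    """Improve a complete schedule with the great deluge rule: a candidate is
    accepted whenever its cost is below a slowly falling 'water level'."""
    current = initial_schedule
    current_cost = cost(current)
    level = current_cost                      # start the water level at the initial cost
    best, best_cost = current, current_cost
    for _ in range(max_iters):
        candidate = neighbor(current)         # e.g. swap two operations on one machine
        candidate_cost = cost(candidate)
        if candidate_cost <= level:           # accept anything under the water level
            current, current_cost = candidate, candidate_cost
            if candidate_cost < best_cost:
                best, best_cost = candidate, candidate_cost
        level *= decay                        # lower the water level each iteration
    return best, best_cost
```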



Related research

Artificial immune systems can be used to generate schedules in changing environments, and the resulting schedules have been shown to be more robust than schedules developed using a genetic algorithm. Good schedules can be produced, especially when the number of antigens is increased; however, an increase in the range of the antigens negatively affected the fitness of the immune system. In this research, we try to improve the results of the system by rescheduling the same problem using the same method, while at the same time maintaining the robustness of the schedules.
The Flexible Job Shop Scheduling Problem (FJSP) is a combinatorial problem that continues to be studied extensively because of its practical implications for manufacturing systems and because of emerging new variants that model and optimize more complex situations reflecting current industry needs. This work presents a new meta-heuristic algorithm called GLNSA (Global-Local Neighborhood Search Algorithm), in which the neighborhood concepts of a cellular automaton are used so that a set of leading solutions called smart_cells generates and shares information that helps to optimize instances of FJSP. The GLNSA algorithm is combined with a tabu search that implements a simplified version of the Nopt1 neighborhood defined in [1]. Experiments on 86 test problems show satisfactory performance of the proposed algorithm compared with recent, widely cited algorithms from the specialized literature, improving the best previously reported result on two of them.
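As a rough illustration of the global-local idea (not the authors' implementation), the sketch below keeps a small set of leading solutions, improves each one in its own neighbourhood, and lets them share information with the current best; `init_solution`, `local_move`, `global_recombine` and `cost` are assumed problem-specific callables.

```python
def glnsa_sketch(init_solution, local_move, global_recombine, cost,
                 n_cells=5, iterations=200, local_tries=10):
    """Rough global-local neighbourhood search: a small set of leading solutions
    ('smart cells') is improved locally and periodically shares information."""
    cells = [init_solution() for _ in range(n_cells)]
    best = min(cells, key=cost)
    for _ in range(iterations):
        # local phase: each smart cell explores its own neighbourhood
        cells = [min([c] + [local_move(c) for _ in range(local_tries)], key=cost)
                 for c in cells]
        # global phase: cells exchange information with the current best solution
        best = min(cells + [best], key=cost)
        cells = [global_recombine(c, best) for c in cells]
    return best
```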
Andrea G. Citrolo, 2013
The hydrophobic-polar (HP) model has been widely studied in the field of protein structure prediction (PSP), both for theoretical purposes and as a benchmark for new optimization strategies. In this work we introduce a new heuristic based on Ant Colony Optimization (ACO) and Markov Chain Monte Carlo (MCMC) that we call Hybrid Monte Carlo Ant Colony Optimization (HMCACO). We describe this method and compare results obtained on well-known HP instances in the 3-dimensional cubic lattice with those obtained with standard ACO and Simulated Annealing (SA). All methods were implemented using an unconstrained neighborhood and a modified objective function to prevent the creation of overlapping walks. Results show that our method performs better than the other heuristics on all benchmark instances.
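For readers unfamiliar with the HP model, the sketch below evaluates the standard objective such heuristics optimize: it counts non-consecutive H-H contacts between adjacent sites of a self-avoiding walk on the cubic lattice and returns the negated contact count as the energy. It is a generic illustration, not the HMCACO code.

```python
def hp_energy(sequence, coords):
    """Energy of an HP-model conformation on the cubic lattice: -1 for every
    pair of non-consecutive H residues sitting on adjacent lattice sites.
    sequence: string of 'H'/'P'; coords: list of (x, y, z) integer positions."""
    assert len(sequence) == len(coords)
    contacts = 0
    for i in range(len(coords)):
        if sequence[i] != 'H':
            continue
        for j in range(i + 2, len(coords)):       # skip chain neighbours
            if sequence[j] != 'H':
                continue
            dx, dy, dz = (abs(a - b) for a, b in zip(coords[i], coords[j]))
            if dx + dy + dz == 1:                 # adjacent lattice sites
                contacts += 1
    return -contacts
```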
Priority dispatching rules (PDRs) are widely used for solving the real-world Job-shop Scheduling Problem (JSSP). However, designing effective PDRs is a tedious task that requires substantial specialized knowledge and often delivers limited performance. In this paper, we propose to learn PDRs automatically via an end-to-end deep reinforcement learning agent. We exploit the disjunctive graph representation of the JSSP and propose a Graph Neural Network based scheme to embed the states encountered during solving. The resulting policy network is size-agnostic, effectively enabling generalization to large-scale instances. Experiments show that the agent can learn high-quality PDRs from scratch with elementary raw features, and demonstrates strong performance against the best existing PDRs. The learned policies also perform well on much larger instances that are unseen in training.
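To illustrate what a hand-crafted PDR does (and what the learned policy replaces), the sketch below builds a schedule greedily with the classic shortest-processing-time rule; the job encoding is an assumption for illustration and is not the paper's disjunctive-graph representation.

```python
def spt_dispatch(jobs):
    """Greedy schedule construction with the Shortest-Processing-Time rule.
    jobs: list of operation lists, each operation a (machine, processing_time) pair."""
    next_op = [0] * len(jobs)                 # next unscheduled operation of each job
    job_ready = [0] * len(jobs)               # time each job becomes available
    machine_ready = {}                        # time each machine becomes available
    schedule = []                             # (job, op_index, machine, start, end)
    while any(next_op[j] < len(jobs[j]) for j in range(len(jobs))):
        candidates = [j for j in range(len(jobs)) if next_op[j] < len(jobs[j])]
        # SPT priority: dispatch the job whose next operation is shortest
        j = min(candidates, key=lambda k: jobs[k][next_op[k]][1])
        machine, p = jobs[j][next_op[j]]
        start = max(job_ready[j], machine_ready.get(machine, 0))
        schedule.append((j, next_op[j], machine, start, start + p))
        job_ready[j] = machine_ready[machine] = start + p
        next_op[j] += 1
    return schedule
```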
Jingpeng Li, Uwe Aickelin, 2008
A Bayesian optimization algorithm for the nurse scheduling problem is presented, which involves choosing a suitable scheduling rule from a set for each nurse's assignment. Unlike our previous work that used GAs (genetic algorithms) to implement implicit learning, the learning in the proposed algorithm is explicit, i.e. we are eventually able to identify and mix building blocks directly. The Bayesian optimization algorithm implements such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, a new instance of each variable is generated, i.e. in our case a new rule string is obtained. Another set of rule strings is generated in this way, some of which replace previous strings based on fitness selection. If the stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed approach might be suitable for other scheduling problems.
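The following is a deliberately simplified sketch of the sampling idea: the paper's algorithm learns a Bayesian network over rule choices, whereas here each position of the rule string is modelled independently and re-estimated from an elite set of promising strings; the `fitness` function and the parameter names are assumptions for illustration.

```python
import random

def eda_rule_strings(string_len, n_rules, fitness, pop_size=50, elite=10, gens=100):
    """Simplified estimation-of-distribution sketch: every position of the rule
    string gets its own rule distribution, re-estimated from promising strings."""
    probs = [[1.0 / n_rules] * n_rules for _ in range(string_len)]

    def sample():
        return [random.choices(range(n_rules), weights=probs[i])[0]
                for i in range(string_len)]

    population = [sample() for _ in range(pop_size)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)   # assume higher fitness is better
        promising = population[:elite]
        # re-estimate per-position rule probabilities from the promising strings
        for i in range(string_len):
            counts = [1] * n_rules                   # Laplace smoothing
            for s in promising:
                counts[s[i]] += 1
            total = sum(counts)
            probs[i] = [c / total for c in counts]
        # keep the elite strings and sample the rest of the new generation
        population = promising + [sample() for _ in range(pop_size - elite)]
    return max(population, key=fitness)
```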
