
A Cooperative Coordination Solver for Travelling Thief Problems

Posted by Conrad Sanderson
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





The travelling thief problem (TTP) is a representative of multi-component optimisation problems with interacting components. TTP combines the knapsack problem (KP) and the travelling salesman problem (TSP). A thief performs a cyclic tour through a set of cities and, pursuant to a collection plan, collects a subset of items into a rented knapsack with finite capacity. The aim is to maximise profit while minimising the renting cost. Existing TTP solvers typically solve the KP and TSP components in an interleaved manner: the solution of one component is kept fixed while the solution of the other component is modified. This suggests low coordination between solving the two components, possibly leading to low-quality TTP solutions. The 2-OPT heuristic, which reverses a segment of the tour, is often used for solving the TSP component. Within TTP, 2-OPT does not take the collection plan into account, which can result in a lower objective value. This in turn can result in the tour modification being rejected by a solver. We propose an expanded form of 2-OPT that changes the collection plan in coordination with the tour modification. Less profitable items collected in cities located earlier in the reversed segment are substituted by more profitable uncollected items in cities located later in the reversed segment. The collection plan is further changed through a modified form of hill-climbing bit-flip search, where changes in the collection state are only permitted for boundary items, defined as the least profitable collected items or the most profitable uncollected items. This restriction reduces the time spent on the KP component, allowing more tours to be evaluated for the TSP component within a given time budget. The proposed approaches form the basis of a new cooperative coordination solver, which is shown to outperform several state-of-the-art TTP solvers.
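As a rough sketch of the coordinated move described in the abstract, the Python below reverses a tour segment and rebalances the collection plan on that segment in the same step, plus a helper that restricts bit-flip candidates to boundary items. All names (tour, plan, city_items, profit, weight) and the exact swap rule are illustrative assumptions, not the authors' implementation.

```python
# Sketch: a coordinated 2-OPT move that reverses a tour segment and, in the
# same step, rebalances the collection plan on that segment.  Data layout and
# the exact swap rule are illustrative assumptions.

def coordinated_two_opt(tour, plan, i, j, city_items, profit, weight, capacity):
    """Reverse tour[i:j+1] and trade items on the reversed segment.

    tour       : list of city indices (cyclic tour)
    plan       : set of collected item indices
    city_items : dict mapping city -> list of item indices in that city
    profit, weight : dicts mapping item -> value
    capacity   : knapsack capacity
    """
    new_tour = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
    new_plan = set(plan)

    segment = new_tour[i:j + 1]
    half = len(segment) // 2
    early, late = segment[:half], segment[half:]

    # Candidate drops: collected items early in the reversed segment,
    # least profitable first.
    drops = sorted((it for c in early for it in city_items[c] if it in new_plan),
                   key=lambda it: profit[it])
    # Candidate picks: uncollected items late in the reversed segment,
    # most profitable first.
    picks = sorted((it for c in late for it in city_items[c] if it not in new_plan),
                   key=lambda it: -profit[it])

    used = sum(weight[it] for it in new_plan)
    for drop, pick in zip(drops, picks):
        if profit[pick] <= profit[drop]:
            break                       # no further gain from swapping
        if used - weight[drop] + weight[pick] <= capacity:
            new_plan.discard(drop)
            new_plan.add(pick)
            used += weight[pick] - weight[drop]

    return new_tour, new_plan


def boundary_items(plan, items, profit, k=5):
    """Candidates for the restricted bit-flip search: the k least profitable
    collected items and the k most profitable uncollected items."""
    collected = sorted((it for it in items if it in plan), key=lambda it: profit[it])
    uncollected = sorted((it for it in items if it not in plan), key=lambda it: -profit[it])
    return collected[:k] + uncollected[:k]
```

Restricting the bit-flip search to the output of boundary_items is what keeps the KP side cheap, freeing time budget for evaluating more tours.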




Read also

The travelling thief problem (TTP) is a multi-component optimisation problem involving two interdependent NP-hard components: the travelling salesman problem (TSP) and the knapsack problem (KP). Recent state-of-the-art TTP solvers modify the underlying TSP and KP solutions in an iterative and interleaved fashion. The TSP solution (cyclic tour) is typically changed in a deterministic way, while changes to the KP solution typically involve a random search, effectively resulting in a quasi-meandering exploration of the TTP solution space. Once a plateau is reached, the iterative search of the TTP solution space is restarted by using a new initial TSP tour. We propose to make the search more efficient through an adaptive surrogate model (based on a customised form of Support Vector Regression) that learns the characteristics of initial TSP tours that lead to good TTP solutions. The model is used to filter out non-promising initial TSP tours, in effect reducing the amount of time spent to find a good TTP solution. Experiments on a broad range of benchmark TTP instances indicate that the proposed approach filters out a considerable number of non-promising initial tours, at the cost of omitting only a small number of the best TTP solutions.
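A hedged sketch of the surrogate filtering idea: an off-the-shelf scikit-learn SVR stands in for the paper's customised SVR, and the tour features, refit schedule, and cutoff rule are simple illustrative guesses rather than the paper's choices.

```python
# Surrogate-based filtering of initial TSP tours (illustrative sketch).
import numpy as np
from sklearn.svm import SVR

def tour_features(tour, dist):
    """Cheap features of a candidate initial tour (illustrative choices)."""
    legs = [dist[tour[k]][tour[(k + 1) % len(tour)]] for k in range(len(tour))]
    return [sum(legs), float(np.mean(legs)), float(np.std(legs)), max(legs)]

class TourFilter:
    def __init__(self, quantile=0.5):
        self.model = SVR(kernel="rbf", C=10.0)
        self.quantile = quantile          # fraction of observed quality used as cutoff
        self.X, self.y = [], []

    def observe(self, tour, dist, ttp_objective):
        """Record the TTP objective eventually reached from this initial tour."""
        self.X.append(tour_features(tour, dist))
        self.y.append(ttp_objective)
        if len(self.y) >= 10:             # refit once enough data exists
            self.model.fit(np.array(self.X), np.array(self.y))

    def is_promising(self, tour, dist):
        """Keep a new initial tour only if its predicted quality clears the cutoff."""
        if len(self.y) < 10:
            return True                   # not enough data yet: keep everything
        pred = self.model.predict([tour_features(tour, dist)])[0]
        cutoff = np.quantile(self.y, self.quantile)
        return pred >= cutoff
```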
The Travelling Thief Problem (TTP) is a challenging combinatorial optimization problem that attracts many scholars. The TTP interconnects two well-known NP-hard problems: the Travelling Salesman Problem (TSP) and the 0-1 Knapsack Problem (KP). Increasingly, algorithms have been proposed for solving this novel problem that combines two interdependent sub-problems. In this paper, TTP is investigated theoretically and empirically. We propose an algorithm that picks items according to a score value calculated by our proposed formulation, sorting items in descending order of this score. Different approaches for solving the TTP are compared and analyzed; the experimental investigations suggest that our proposed approach is very efficient in meeting or beating current state-of-the-art heuristic solutions on a comprehensive set of benchmark TTP instances.
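The paper's scoring formulation is not reproduced in the abstract, so the sketch below uses a stand-in score (profit traded off against weight and the remaining carrying distance) purely to illustrate the score-then-sort packing scheme; the function and variable names are assumptions.

```python
# Greedy score-based packing (illustrative sketch; the score below is a
# common profit/weight/distance trade-off, not the paper's formulation).
def pick_items(tour, city_items, profit, weight, capacity, dist_to_end):
    """Score every item, sort in descending score, then collect greedily.

    dist_to_end[c] : remaining tour distance after visiting city c (precomputed).
    """
    scored = []
    for city in tour:
        for item in city_items[city]:
            s = profit[item] / (weight[item] * (1.0 + dist_to_end[city]))
            scored.append((s, item))
    scored.sort(reverse=True)             # best-scoring items first

    plan, used = set(), 0.0
    for s, item in scored:
        if used + weight[item] <= capacity:
            plan.add(item)
            used += weight[item]
    return plan
```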
Problems of cooperation--in which agents seek ways to jointly improve their welfare--are ubiquitous and important. They can be found at scales ranging from our daily routines--such as driving on highways, scheduling meetings, and working collaboratively--to our global challenges--such as peace, commerce, and pandemic preparedness. Arguably, the success of the human species is rooted in our ability to cooperate. Since machines powered by artificial intelligence are playing an ever greater role in our lives, it will be important to equip them with the capabilities necessary to cooperate and to foster cooperation. We see an opportunity for the field of artificial intelligence to explicitly focus effort on this class of problems, which we term Cooperative AI. The objective of this research would be to study the many aspects of the problems of cooperation and to innovate in AI to contribute to solving these problems. Central goals include building machine agents with the capabilities needed for cooperation, building tools to foster cooperation in populations of (machine and/or human) agents, and otherwise conducting AI research for insight relevant to problems of cooperation. This research integrates ongoing work on multi-agent systems, game theory and social choice, human-machine interaction and alignment, natural-language processing, and the construction of social tools and platforms. However, Cooperative AI is not the union of these existing areas, but rather an independent bet about the productivity of specific kinds of conversations that involve these and other areas. We see opportunity to more explicitly focus on the problem of cooperation, to construct unified theory and vocabulary, and to build bridges with adjacent communities working on cooperation, including in the natural, social, and behavioural sciences.
We consider the problem of zero-shot coordination - constructing AI agents that can coordinate with novel partners they have not seen before (e.g. humans). Standard Multi-Agent Reinforcement Learning (MARL) methods typically focus on the self-play (SP) setting where agents construct strategies by playing the game with themselves repeatedly. Unfortunately, applying SP naively to the zero-shot coordination problem can produce agents that establish highly specialized conventions that do not carry over to novel partners they have not been trained with. We introduce a novel learning algorithm called other-play (OP), that enhances self-play by looking for more robust strategies, exploiting the presence of known symmetries in the underlying problem. We characterize OP theoretically as well as experimentally. We study the cooperative card game Hanabi and show that OP agents achieve higher scores when paired with independently trained agents. In preliminary results we also show that our OP agents obtain higher average scores when paired with human players, compared to state-of-the-art SP agents.
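A toy numerical illustration of why other-play differs from self-play, using a small coordination game with interchangeable actions; the game, payoffs, and symmetry group here are illustrative assumptions, not the experiments reported in the paper.

```python
# Toy illustration of other-play vs self-play on a coordination game where
# several actions are interchangeable (symmetric).  Payoffs are assumptions.

payoff = [1.0] * 9 + [0.9]        # ten "levers": nine pay 1.0, one pays 0.9
symmetric = set(range(9))         # the nine 1.0-levers are interchangeable

def self_play_value(a):
    # In self-play both agents pick the same lever, so any 1.0-lever looks best,
    # but the specific choice is an arbitrary convention.
    return payoff[a]

def other_play_value(a):
    # Under other-play the partner's choice is relabelled by a random symmetry,
    # so a 1.0-lever matches only 1/9 of the time; the odd lever always matches.
    return payoff[a] / len(symmetric) if a in symmetric else payoff[a]

best_sp = max(range(10), key=self_play_value)   # an arbitrary 1.0-lever (fragile)
best_op = max(range(10), key=other_play_value)  # the 0.9-lever (robust to relabelling)
print(best_sp, best_op)
```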
In the path version of the Travelling Salesman Problem (Path-TSP), a salesman is looking for the shortest Hamiltonian path through a set of n cities. The salesman has to start his journey at a given city s, visit every city exactly once, and finally end his trip at another given city t. In this paper we identify a new polynomially solvable case of the Path-TSP where the distance matrix of the cities is a so-called Demidenko matrix. We identify a number of crucial combinatorial properties of the optimal solution, and we design a dynamic program with time complexity $O(n^6)$.
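For context only: the sketch below is a standard Held-Karp style dynamic program for the general Path-TSP with fixed endpoints s and t, running in O(2^n n^2) time. The paper's contribution, a specialised O(n^6) dynamic program that exploits the Demidenko structure of the distance matrix, is not reproduced here.

```python
# General Path-TSP by Held-Karp dynamic programming (exponential-time baseline,
# shown for reference only; not the paper's polynomial algorithm).
from itertools import combinations

def path_tsp(dist, s, t):
    """Length of the shortest Hamiltonian s-t path; dist is an n x n matrix."""
    n = len(dist)
    others = [v for v in range(n) if v not in (s, t)]
    if not others:
        return dist[s][t]
    # best[(visited, last)] = shortest s-...-last path visiting exactly 'visited'
    best = {(frozenset(), s): 0.0}
    for size in range(1, len(others) + 1):
        for subset in combinations(others, size):
            S = frozenset(subset)
            for last in subset:
                prev = S - {last}
                best[(S, last)] = min(best[(prev, u)] + dist[u][last]
                                      for u in (prev or {s}))
    full = frozenset(others)
    return min(best[(full, u)] + dist[u][t] for u in others)
```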

