StarCraft II Build Order Optimization using Deep Reinforcement Learning and Monte-Carlo Tree Search


Abstract

The real-time strategy game StarCraft II has been posed as a challenge for reinforcement learning by Google's DeepMind. This study examines the use of an agent based on the Monte-Carlo Tree Search algorithm for optimizing the build order in StarCraft II, and discusses how its performance can be improved further by combining it with a deep reinforcement learning neural network. The experimental results show that the Monte-Carlo Tree Search agent achieves a score comparable to that of a novice human player while using only limited time and computational resources, which suggests that combining it with deep reinforcement learning could yield scores comparable to those of a human expert.
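To illustrate the kind of planning the abstract refers to, the following is a minimal sketch of Monte-Carlo Tree Search applied to a toy build-order problem. The action set, planning horizon, and reward function here are purely illustrative placeholders, not the simulator or scoring used in the thesis.

    import math
    import random

    # Hypothetical, simplified build-order model: a state is the tuple of
    # actions chosen so far. ACTIONS and reward() stand in for the game
    # simulator and score used in the study (illustrative names only).
    ACTIONS = ["worker", "supply", "barracks", "marine"]
    HORIZON = 10  # number of build decisions to plan ahead

    def reward(build_order):
        # Toy evaluation: reward army units supported by production and economy.
        workers = build_order.count("worker")
        marines = build_order.count("marine")
        barracks = build_order.count("barracks")
        return min(marines, barracks * 2) + 0.3 * workers

    class Node:
        def __init__(self, state, parent=None):
            self.state = state      # sequence of build actions chosen so far
            self.parent = parent
            self.children = {}      # action -> child Node
            self.visits = 0
            self.value = 0.0

        def ucb1(self, c=1.4):
            # Standard UCB1 score balancing exploitation and exploration.
            if self.visits == 0:
                return float("inf")
            return (self.value / self.visits +
                    c * math.sqrt(math.log(self.parent.visits) / self.visits))

    def mcts(iterations=2000):
        root = Node(())
        for _ in range(iterations):
            node = root
            # 1. Selection: descend fully expanded nodes via UCB1.
            while len(node.state) < HORIZON and len(node.children) == len(ACTIONS):
                node = max(node.children.values(), key=Node.ucb1)
            # 2. Expansion: add one untried action if the node is non-terminal.
            if len(node.state) < HORIZON:
                untried = [a for a in ACTIONS if a not in node.children]
                action = random.choice(untried)
                child = Node(node.state + (action,), parent=node)
                node.children[action] = child
                node = child
            # 3. Simulation: random rollout to the planning horizon.
            rollout = list(node.state)
            while len(rollout) < HORIZON:
                rollout.append(random.choice(ACTIONS))
            value = reward(rollout)
            # 4. Backpropagation: update statistics up to the root.
            while node is not None:
                node.visits += 1
                node.value += value
                node = node.parent
        # Recommend the most-visited first action as the next build step.
        return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

    if __name__ == "__main__":
        print("Recommended next build action:", mcts())

In the approach the abstract describes, the random rollout and hand-written reward would be the natural places to substitute a deep reinforcement learning network as a policy or value estimator.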
