Recent AI research on StarCraft II has diversified in scope, from macroscopic approaches, such as training agents to win full games, to microscopic approaches, such as selecting game strategies or controlling individual units. Unlike turn-based games such as Chess or Go, an agent playing StarCraft II cannot obtain complete information about the game's action space, and this hinders the agent's long-term planning: the agent must decide among thousands of options in a very short period of time. In particular, training an agent to control units in real-time combat situations is an active and challenging area of RL research in StarCraft II due to its complexity. Much related and advanced research has been published since DeepMind's work on creating AI players for StarCraft II using RL algorithms. This activity is supported by a substantial research community, AI competitions on StarCraft II, e-sports fans vitalizing the StarCraft II community, and open Application Programming Interfaces (APIs) that let researchers manipulate actual game environments.
StarCraft II, one of the representative games of the Real-Time Strategy (RTS) genre, has become an important challenge in AI research. During combat situations in StarCraft II, micro-controlling various combat units is crucial in order to win the game. Among the many combat units, the spellcaster is one of the most significant components and greatly influences combat results. Despite the importance of spellcaster units in combat, training methods for carefully controlling them have not been thoroughly considered in related studies due to their complexity. This paper therefore proposes a Deep Reinforcement Learning (DRL)-based training method for spellcaster units, using the A3C algorithm, that addresses this limitation of StarCraft II AI research. The main idea is to train two Protoss spellcaster units to use 'Force Field' and 'Psionic Storm' effectively under three newly designed minigames, each representing a unique spell-usage scenario. As a result, the trained agents show winning rates of more than 85% in each scenario. We expect that our training method can also be used to train other advanced and tactical units by applying transfer learning in more complex minigame scenarios or full game maps.
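The A3C algorithm mentioned above updates each worker's policy and value networks from n-step bootstrapped returns and the resulting advantages. As a minimal sketch of that return/advantage computation (the reward and value numbers here are purely illustrative, not results from this paper):

```python
import numpy as np

def n_step_returns(rewards, bootstrap_value, gamma=0.99):
    """Discounted n-step returns R_t = r_t + gamma * R_{t+1},
    bootstrapped from the critic's value estimate of the final
    state, as used in the A3C update."""
    R = bootstrap_value
    returns = np.empty(len(rewards))
    for t in reversed(range(len(rewards))):
        R = rewards[t] + gamma * R
        returns[t] = R
    return returns

# Illustrative rollout: sparse rewards and hypothetical critic values.
rewards = [1.0, 0.0, 0.0, 1.0]
values = np.array([0.5, 0.4, 0.6, 0.7])  # critic estimates V(s_t)
rets = n_step_returns(rewards, bootstrap_value=0.0, gamma=0.9)

# Advantage for the policy-gradient term: A_t = R_t - V(s_t)
advantages = rets - values
```

In the full method, each asynchronous worker would collect such a rollout from its minigame environment, compute these advantages, and apply gradients of the policy and value losses to the shared network parameters.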