SIDE: I Infer the State I Want to Learn


Abstract in English

As one of the solutions to the Dec-POMDP problem, the value decomposition method has achieved good results recently. However, most value decomposition methods require the global state during training, which is infeasible in scenarios where the global state cannot be obtained. We therefore propose a novel value decomposition framework, named State Inference for value DEcomposition (SIDE), which eliminates the need to know the true state by simultaneously solving the two problems of optimal control and state inference. SIDE can be extended to any value decomposition method, as well as to other types of multi-agent algorithms in the Dec-POMDP setting. Based on the performance of different algorithms in StarCraft II micromanagement tasks, we verify that SIDE can construct, from past local observations, a current state that benefits the reinforcement learning process.
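The abstract's core idea — infer a global state from the agents' past local observations and condition the value-mixing step on that inferred state instead of the true one — can be sketched roughly as follows. This is a minimal illustration only, not the paper's implementation: all dimensions, parameter names, and the linear/tanh layers are hypothetical stand-ins (SIDE trains its state-inference and mixing modules end to end; a QMIX-style monotonic mixer is assumed here for concreteness).

```python
import numpy as np

rng = np.random.default_rng(0)
N_AGENTS, OBS_DIM, HID_DIM, STATE_DIM, N_ACTIONS = 3, 4, 8, 6, 5

# Hypothetical parameters, random here purely for illustration.
W_enc = rng.normal(size=(OBS_DIM + HID_DIM, HID_DIM)) * 0.1    # recurrent obs encoder
W_state = rng.normal(size=(N_AGENTS * HID_DIM, STATE_DIM)) * 0.1  # state-inference head
W_q = rng.normal(size=(HID_DIM, N_ACTIONS)) * 0.1              # per-agent utility head
W_mix = rng.normal(size=(STATE_DIM, N_AGENTS)) * 0.1           # state-conditioned mixer

def step(obs, hidden):
    """One decision step: encode local observation histories, infer a
    global state from the encodings, and mix per-agent Q-values with it."""
    # Recurrent encoding of each agent's local observation history.
    hidden = np.tanh(np.concatenate([obs, hidden], axis=1) @ W_enc)
    # State inference: reconstruct a global-state surrogate from all hiddens,
    # replacing the true state that most value decomposition methods require.
    inferred_state = np.tanh(hidden.reshape(1, -1) @ W_state)   # (1, STATE_DIM)
    # Per-agent utilities and greedy action values.
    q_i = hidden @ W_q                                          # (N_AGENTS, N_ACTIONS)
    chosen = q_i.max(axis=1)                                    # (N_AGENTS,)
    # Monotonic mixing: non-negative weights conditioned on the inferred state.
    w = np.abs(inferred_state @ W_mix)                          # (1, N_AGENTS)
    q_tot = float(w @ chosen)
    return q_tot, hidden, inferred_state

obs = rng.normal(size=(N_AGENTS, OBS_DIM))
hidden = np.zeros((N_AGENTS, HID_DIM))
q_tot, hidden, state = step(obs, hidden)
```

Because the mixer sees only `inferred_state`, training needs no access to the true global state — which is the claimed advantage of SIDE over standard value decomposition methods.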
