Planning Multimodal Exploratory Actions for Online Robot Attribute Learning


Abstract

Robots frequently need to perceive object attributes, such as red, heavy, and empty, using multimodal exploratory actions, such as look, lift, and shake. Robot attribute learning algorithms aim to learn an observation model for each perceivable attribute given an exploratory action. Once the attribute models are learned, they can be used to identify the attributes of new objects, answering questions such as "Is this object red and empty?" Attribute learning and attribute identification have been treated as two separate problems in the literature. In this paper, we first define a new problem called online robot attribute learning (On-RAL), in which the robot performs attribute learning and attribute identification simultaneously. We then develop an algorithm called information-theoretic reward shaping (ITRS) that actively addresses the trade-off between exploration and exploitation in On-RAL problems. ITRS was compared with competitive robot attribute learning baselines, and experimental results demonstrate its superiority in both learning efficiency and identification accuracy.
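To make the idea of information-theoretic reward shaping concrete, the sketch below shows one generic way an exploration bonus can be combined with an extrinsic task reward: the bonus is the expected reduction in entropy (expected information gain) of a Bernoulli belief over a single attribute, given an assumed binary observation model for an exploratory action. This is only an illustration under those assumptions, not the actual ITRS algorithm from the paper; the function names, the weighting parameter beta, and the numbers in the usage example are hypothetical.

```python
import numpy as np


def entropy(p):
    """Shannon entropy (bits) of a Bernoulli belief p = P(attribute present)."""
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))


def expected_information_gain(belief, p_obs_given_true, p_obs_given_false):
    """Expected entropy reduction of the attribute belief after one exploratory
    action with an assumed binary observation model (hypothetical likelihoods)."""
    p_obs = belief * p_obs_given_true + (1 - belief) * p_obs_given_false
    # Posterior belief for each possible observation outcome (Bayes rule).
    post_pos = belief * p_obs_given_true / p_obs
    post_neg = belief * (1 - p_obs_given_true) / (1 - p_obs)
    expected_posterior_entropy = (
        p_obs * entropy(post_pos) + (1 - p_obs) * entropy(post_neg)
    )
    return entropy(belief) - expected_posterior_entropy


def shaped_reward(extrinsic_reward, belief, p_obs_given_true, p_obs_given_false,
                  beta=1.0):
    """Extrinsic task reward plus a beta-weighted information-gain bonus."""
    return extrinsic_reward + beta * expected_information_gain(
        belief, p_obs_given_true, p_obs_given_false)


# Hypothetical usage: a 'shake' action costs a little (-0.2) but is informative
# about 'empty' when the belief is uncertain (0.5), so its shaped reward is
# higher than the raw action cost alone.
print(shaped_reward(-0.2, belief=0.5, p_obs_given_true=0.9, p_obs_given_false=0.2))
```

In such a scheme, actions that are expected to sharpen the robot's attribute beliefs earn a larger bonus, which is one simple way to trade off exploration against exploitation during simultaneous learning and identification.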
