Reinforcement learning (RL) agents in human-computer interaction (HCI) applications require many user interactions before they can perform well. To address this cold-start problem, we propose a novel approach: using cognitive models to pre-train RL agents before they are applied to real users. After briefly reviewing relevant cognitive models, we present our general methodological approach, followed by two case studies from our previous and ongoing projects. We hope this position paper stimulates conversations between RL, HCI, and cognitive science researchers in order to explore the full potential of the approach.
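To make the proposed pre-training idea concrete, the sketch below shows one minimal way it could look in code: a cognitive model acts as a simulated user, and an RL agent is pre-trained against it before any real user is involved. All names here (ForgettingUserModel, pretrain_q_agent) are hypothetical, and the exponential forgetting-curve user model and linear Q-learner are illustrative stand-ins under our own assumptions, not the specific cognitive models or agents used in the paper's case studies.

```python
import numpy as np

# Hypothetical cognitive user model: an Ebbinghaus-style forgetting curve
# standing in for a real user in a flashcard-tutoring task.
class ForgettingUserModel:
    def __init__(self, n_items, decay=0.3, seed=0):
        self.n_items = n_items
        self.decay = decay
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        # Memory strength per item; 0 = unlearned.
        self.strength = np.zeros(self.n_items)
        return self.strength.copy()

    def step(self, item):
        # Recall probability grows with the item's current memory strength.
        p_recall = 1.0 - np.exp(-self.strength[item])
        recalled = self.rng.random() < p_recall
        # Reviewing an item strengthens it; all items decay between steps.
        self.strength *= (1.0 - self.decay)
        self.strength[item] += 1.0
        reward = float(recalled)
        return self.strength.copy(), reward


def pretrain_q_agent(env, episodes=2000, horizon=20, eps=0.1, lr=0.1, gamma=0.95):
    """Pre-train a linear Q-function against the simulated user (TD(0))."""
    w = np.zeros((env.n_items, env.n_items))  # one weight vector per action
    rng = np.random.default_rng(1)
    for _ in range(episodes):
        s = env.reset()
        for _ in range(horizon):
            q = w @ s
            a = rng.integers(env.n_items) if rng.random() < eps else int(np.argmax(q))
            s_next, r = env.step(a)
            target = r + gamma * np.max(w @ s_next)
            w[a] += lr * (target - q[a]) * s  # TD(0) update on the taken action
            s = s_next
    return w


# Pre-train on the simulated user; the learned weights would then warm-start
# the agent that goes on to interact with (and keep learning from) real users.
weights = pretrain_q_agent(ForgettingUserModel(n_items=5))
print(weights.round(2))
```

In this framing, swapping in a richer cognitive model only changes the simulated-user class; the pre-trained policy is then fine-tuned online once real user interactions become available.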