We agree with Lake and colleagues on their list of key ingredients for building humanlike intelligence, including the idea that model-based reasoning is essential. However, we favor an approach that centers on one additional ingredient: autonomy. In particular, we aim toward agents that can both build and exploit their own internal models, with minimal human hand-engineering. We believe an approach centered on autonomous learning has the greatest chance of success as we scale toward real-world complexity, tackling domains for which ready-made formal models are not available. Here we survey several important examples of the progress that has been made toward building autonomous agents with humanlike abilities, and highlight some outstanding challenges.
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
Deep reinforcement learning can match or exceed human performance in stable contexts, but with minor changes to the environment, artificial networks, unlike humans, often cannot adapt. Humans rely on a combination of heuristics to simplify computational load and imagination to extend experiential learning to new and more challenging environments. Motivated by theories of the hierarchical organization of human prefrontal networks, we have developed a model of hierarchical reinforcement learning that combines both heuristics and imagination into a stumbler-strategist network. We test the performance of this network on Wythoff's game, a gridworld environment with a known optimal strategy. We show that a heuristic labeling of each position as "hot" or "cold", combined with imagined play, both accelerates learning and promotes transfer to novel games, while also improving model interpretability.
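For readers unfamiliar with the testbed: Wythoff's game is a two-pile Nim variant in which a player may remove any number of counters from one pile, or the same number from both, and its cold (losing-for-the-mover) positions are characterized by the golden ratio. The Python sketch below shows what a hot/cold labeling heuristic could look like; the function names are ours for illustration and are not the paper's implementation.

    import math

    PHI = (1 + math.sqrt(5)) / 2  # golden ratio

    def is_cold(a: int, b: int) -> bool:
        # A position (a, b) with a <= b is cold exactly when
        # a == floor((b - a) * PHI); cold positions are losses
        # for the player about to move.
        a, b = min(a, b), max(a, b)
        return a == math.floor((b - a) * PHI)

    def strategist_move(a: int, b: int):
        # Greedy use of the labeling: enumerate the legal moves
        # (take any number from one pile, or the same number from
        # both) and prefer any move landing on a cold position.
        moves = [(a - k, b) for k in range(1, a + 1)]
        moves += [(a, b - k) for k in range(1, b + 1)]
        moves += [(a - k, b - k) for k in range(1, min(a, b) + 1)]
        cold = [m for m in moves if is_cold(*m)]
        return cold[0] if cold else (moves[0] if moves else None)

From the hot position (2, 3), this policy moves to the cold position (2, 1), i.e. (1, 2) up to pile order; the known optimal strategy the abstract refers to is exactly "move to a cold position whenever one is reachable".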
Howes et al. Reply to "Comment on 'Kinetic Simulations of Magnetized Turbulence in Astrophysical Plasmas'" (arXiv:0711.4355)
In this communication we refute a criticism of the results of our work [3] that was raised in references [1] and [2].
The Abstraction and Reasoning Corpus (ARC) is a set of tasks that tests an agent's ability to flexibly solve novel problems. While most ARC tasks are easy for humans, they are challenging for state-of-the-art AI. How do we build intelligent systems that can generalize to novel situations and understand human instructions in domains such as ARC? We posit that the answer may be found by studying how humans communicate to each other in solving these tasks. We present LARC, the Language-annotated ARC: a collection of natural language descriptions by a group of human participants, unfamiliar both with ARC and with each other, who instruct each other on how to solve ARC tasks. LARC contains successful instructions for 88% of the ARC tasks. We analyze the collected instructions as "natural programs", finding that most natural program concepts have analogies in typical computer programs. However, unlike how one precisely programs a computer, we find that humans both anticipate and exploit ambiguities to communicate effectively. We demonstrate that a state-of-the-art program synthesis technique, when augmented with these language annotations, outperforms its language-free counterpart.
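To make the "natural program" analogy concrete: each ARC task provides a few input/output grid pairs, and a candidate program is accepted when it reproduces every training pair. Below is a minimal Python sketch with hypothetical candidate functions of our own choosing; LARC's actual synthesizer is a neural-guided search over a much richer program space.

    from typing import Callable, List, Optional, Tuple

    Grid = List[List[int]]  # ARC grids: small arrays of color indices 0-9

    def mirror_left_right(grid: Grid) -> Grid:
        # Candidate program for an instruction like
        # "flip the picture left to right".
        return [list(reversed(row)) for row in grid]

    def recolor(grid: Grid, src: int = 1, dst: int = 2) -> Grid:
        # Candidate for an instruction like "turn every cell of one
        # color into another" (ARC encodes colors as integers).
        return [[dst if c == src else c for c in row] for row in grid]

    def synthesize(train_pairs: List[Tuple[Grid, Grid]],
                   candidates: List[Callable[[Grid], Grid]]
                   ) -> Optional[Callable[[Grid], Grid]]:
        # Enumerate candidates; accept the first program that maps
        # every training input to its training output.
        for prog in candidates:
            if all(prog(inp) == out for inp, out in train_pairs):
                return prog
        return None

For the single training pair [[1, 0], [0, 2]] -> [[0, 1], [2, 0]], synthesize([([[1, 0], [0, 2]], [[0, 1], [2, 0]])], [recolor, mirror_left_right]) rejects recolor and returns mirror_left_right; the paper's point is that language annotations help steer exactly this kind of search.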