The deep neural nets of modern artificial intelligence (AI) have not achieved defining features of biological intelligence, including abstraction, causal learning, and energy efficiency. While scaling to larger models has delivered performance improvements for current applications, more brain-like capacities may demand new theories, models, and methods for designing artificial learning systems. Here, we argue that this opportunity to reassess insights from the brain should stimulate cooperation between AI research and theory-driven computational neuroscience (CN). To motivate a brain basis of neural computation, we present a dynamical view of intelligence from which we elaborate concepts of sparsity in network structure, temporal dynamics, and interactive learning. In particular, we suggest that temporal dynamics, as expressed through neural synchrony, nested oscillations, and flexible sequences, provide a rich computational layer for reading and updating hierarchical models distributed in long-term memory networks. Moreover, embracing agent-centered paradigms in AI and CN will accelerate our understanding of the complex dynamics and behaviors that build useful world models. A convergence of AI/CN theories and objectives will reveal dynamical principles of intelligence for brains and engineered learning systems. This article was inspired by our symposium on dynamical neuroscience and machine learning at the 6th Annual US/NIH BRAIN Initiative Investigators Meeting.
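The notion of nested oscillations invoked above can be made concrete with a toy simulation. The sketch below is not from the article; all parameters (sampling rate, theta and gamma frequencies, bin count) are illustrative assumptions. It generates a slow theta rhythm whose phase modulates the amplitude of a faster gamma rhythm, then summarizes the nesting by binning gamma amplitude against theta phase.

```python
# Minimal sketch of nested oscillations (phase-amplitude coupling).
# All frequencies and parameters are illustrative assumptions, not
# values taken from the article.
import numpy as np

fs = 1000                               # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)         # 2 s of simulated signal

theta_freq, gamma_freq = 6.0, 60.0      # illustrative theta and gamma (Hz)
theta = np.sin(2 * np.pi * theta_freq * t)

# Gamma amplitude rises near theta peaks: the fast rhythm is "nested"
# within the phase of the slow rhythm.
gamma_envelope = 0.5 * (1 + theta)
gamma = gamma_envelope * np.sin(2 * np.pi * gamma_freq * t)

signal = theta + gamma                  # composite signal one might record

# Crude coupling summary: average gamma amplitude in bins of theta phase.
theta_phase = np.angle(np.exp(1j * 2 * np.pi * theta_freq * t - 1j * np.pi / 2))
edges = np.linspace(-np.pi, np.pi, 19)  # 18 phase bins
bin_idx = np.digitize(theta_phase, edges)
mean_amp = np.array([gamma_envelope[bin_idx == b].mean() for b in range(1, 19)])
print("gamma amplitude by theta phase bin:", np.round(mean_amp, 2))
```

In this toy case the binned gamma amplitude peaks at the theta phase where the envelope is largest, which is the basic signature analyses of phase-amplitude coupling look for in recorded data.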
Within computational neuroscience, informal interactions with modelers often reveal wildly divergent goals. In this opinion piece, we explicitly address the diversity of goals that motivate and ultimately influence modeling efforts. We argue that a w
Computational intelligence is broadly defined as biologically inspired computing. Usually, inspiration is drawn from neural systems. This article shows how to analyze neural systems using information theory to obtain constraints that help identify th
Almost all research work in computational neuroscience involves software. As researchers try to understand ever more complex systems, there is a continual need for software with new capabilities. Because of the wide range of questions being investiga
We describe a mathematical model of grounded symbols in the brain. It also serves as a computational foundation for the Perceptual Symbol System (PSS). This development requires new mathematical methods of dynamic logic (DL), which have overcome limita
Fluid intelligence (Gf) has been defined as the ability to reason and solve previously unseen problems. Links to Gf have been found in magnetic resonance imaging (MRI) sequences such as functional MRI and diffusion tensor imaging. As part of the Adol