
A brain basis of dynamical intelligence for AI and computational neuroscience

 Added by Joseph Monaco
 Publication date 2021
Language: English





The deep neural nets of modern artificial intelligence (AI) have not achieved defining features of biological intelligence, including abstraction, causal learning, and energy-efficiency. While scaling to larger models has delivered performance improvements for current applications, more brain-like capacities may demand new theories, models, and methods for designing artificial learning systems. Here, we argue that this opportunity to reassess insights from the brain should stimulate cooperation between AI research and theory-driven computational neuroscience (CN). To motivate a brain basis of neural computation, we present a dynamical view of intelligence from which we elaborate concepts of sparsity in network structure, temporal dynamics, and interactive learning. In particular, we suggest that temporal dynamics, as expressed through neural synchrony, nested oscillations, and flexible sequences, provide a rich computational layer for reading and updating hierarchical models distributed in long-term memory networks. Moreover, embracing agent-centered paradigms in AI and CN will accelerate our understanding of the complex dynamics and behaviors that build useful world models. A convergence of AI/CN theories and objectives will reveal dynamical principles of intelligence for brains and engineered learning systems. This article was inspired by our symposium on dynamical neuroscience and machine learning at the 6th Annual US/NIH BRAIN Initiative Investigators Meeting.
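The "nested oscillations" the authors highlight can be illustrated with a minimal sketch (not from the article): a fast gamma rhythm whose amplitude is modulated by the phase of a slower theta rhythm, a simple form of phase-amplitude nesting. All parameters (sampling rate, 8 Hz theta, 60 Hz gamma) are illustrative assumptions.

```python
import numpy as np

fs = 1000.0                       # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)   # 2 s of simulated signal

theta = np.sin(2 * np.pi * 8 * t)             # slow 8 Hz theta rhythm
gamma_amp = 0.5 * (1.0 + theta)               # gamma envelope tied to theta phase
gamma = gamma_amp * np.sin(2 * np.pi * 60 * t)  # fast 60 Hz gamma rhythm

# The composite trace carries gamma bursts nested in theta cycles.
signal = theta + gamma
```

Because `gamma_amp` is a linear function of `theta`, gamma power peaks exactly at theta peaks; real neural data require estimating that coupling, e.g. from the analytic envelope of a band-passed signal.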

Related Research

Within computational neuroscience, informal interactions with modelers often reveal wildly divergent goals. In this opinion piece, we explicitly address the diversity of goals that motivate and ultimately influence modeling efforts. We argue that a wide range of goals can be meaningfully taken to be of highest importance. A simple informal survey conducted on the Internet confirmed the diversity of goals in the community. However, different priorities or preferences of individual researchers can lead to divergent model evaluation criteria. We propose that many disagreements in evaluating the merit of computational research stem from differences in goals and not from the mechanics of constructing, describing, and validating models. We suggest that authors state explicitly their goals when proposing models so that others can judge the quality of the research with respect to its stated goals.
Computational intelligence is broadly defined as biologically-inspired computing. Usually, inspiration is drawn from neural systems. This article shows how to analyze neural systems using information theory to obtain constraints that help identify the algorithms run by such systems and the information they represent. Algorithms and representations identified information-theoretically may then guide the design of biologically inspired computing systems (BICS). The material covered includes the necessary introduction to information theory and the estimation of information theoretic quantities from neural data. We then show how to analyze the information encoded in a system about its environment, and also discuss recent methodological developments on the question of how much information each agent carries about the environment either uniquely, or redundantly or synergistically together with others. Last, we introduce the framework of local information dynamics, where information processing is decomposed into component processes of information storage, transfer, and modification -- locally in space and time. We close by discussing example applications of these measures to neural data and other complex systems.
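The information-theoretic quantities this abstract describes can be estimated from paired discrete samples. Below is a minimal plug-in (maximum-likelihood) estimator of mutual information in bits; it is a toy sketch and omits the bias corrections that real neural-data analyses require.

```python
import math
from collections import Counter

def mutual_information(x, y):
    """Plug-in estimate of I(X;Y) in bits from paired discrete samples."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    return sum(
        (c / n) * math.log2((c / n) / ((px[a] / n) * (py[b] / n)))
        for (a, b), c in pxy.items()
    )

# A binary stimulus and a response that copies it share 1 bit of
# information; an unrelated constant response shares 0 bits.
stim = [0, 1] * 500
print(mutual_information(stim, stim))        # 1.0
print(mutual_information(stim, [0] * 1000))  # 0.0
```

The same counting machinery extends to the decompositions the abstract mentions (unique, redundant, and synergistic information), which partition this total across multiple sources.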
Almost all research work in computational neuroscience involves software. As researchers try to understand ever more complex systems, there is a continual need for software with new capabilities. Because of the wide range of questions being investigated, new software is often developed rapidly by individuals or small groups. In these cases, it can be hard to demonstrate that the software gives the right results. Software developers are often open about the code they produce and willing to share it, but there is little appreciation among potential users of the great diversity of software development practices and end results, and how this affects the suitability of software tools for use in research projects. To help clarify these issues, we have reviewed a range of software tools and asked how the culture and practice of software development affects their validity and trustworthiness. We identified four key questions that can be used to categorize software projects and correlate them with the type of product that results. The first question addresses what is being produced. The other three concern why, how, and by whom the work is done. The answers to these questions show strong correlations with the nature of the software being produced, and its suitability for particular purposes. Based on our findings, we suggest ways in which current software development practice in computational neuroscience can be improved and propose checklists to help developers, reviewers, and scientists assess whether particular pieces of software are ready for use in research.
We describe a mathematical model of grounded symbols in the brain. It also serves as a computational foundation for the Perceptual Symbol System (PSS). This development requires new mathematical methods of dynamic logic (DL), which overcome limitations of classical artificial intelligence and connectionist approaches. The paper discusses these past limitations, relates them to the combinatorial complexity (exponential explosion) of past algorithms, and further to the static nature of classical logic. The new mathematical theory, DL, is a process logic. A salient property of this process is the evolution of vague representations into crisp ones. The paper first applies it to one aspect of PSS: situation learning from object perceptions. Then we relate DL to the essential PSS mechanisms of concepts, simulators, grounding, productivity, binding, and recursion, and to the mechanisms relating grounded and amodal symbols. We discuss DL as a general theory describing the process of cognition on multiple levels of abstraction. We also discuss the implications of this theory for interactions between cognition and language, mechanisms of language grounding, and the possible role of language in grounding abstract cognition. The developed theory makes experimental predictions and will impact future theoretical developments in cognitive science, including knowledge representation and perception-cognition interaction. Experimental neuroimaging evidence for DL and PSS is discussed, as well as future research directions.
Fluid intelligence (Gf) has been defined as the ability to reason and solve previously unseen problems. Links to Gf have been found in magnetic resonance imaging (MRI) sequences such as functional MRI and diffusion tensor imaging. As part of the Adolescent Brain Cognitive Development Neurocognitive Prediction Challenge 2019, we sought to predict Gf in children aged 9-10 from T1-weighted (T1W) MRIs. The data included atlas-aligned volumetric T1W images, atlas-defined segmented regions, age, and sex for 3739 subjects used for training and internal validation and 415 subjects used for external validation. We trained sex-specific convolutional neural net (CNN) and random forest models to predict Gf. For the convolutional model, skull-stripped volumetric T1W images aligned to the SRI24 brain atlas were used for training. Volumes of segmented atlas regions along with each subject's age were used to train the random forest regressor models. Performance was measured using the mean squared error (MSE) of the predictions. Random forest models achieved lower MSEs than CNNs. Further, the external validation data had a better MSE for females than males (60.68 vs. 80.74), with a combined MSE of 70.83. Our results suggest that predictive models of Gf from volumetric T1W MRI features alone may perform better when trained separately on male and female data. However, the performance of our models indicates that more information is necessary beyond the available data to make accurate predictions of Gf.
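The winning pipeline in this abstract (random forest regression on segmented-region volumes plus age) has a simple shape, sketched below on synthetic stand-in data. The feature counts, subject counts, and scores are illustrative assumptions, not the challenge data; the paper trains separate models per sex, and this single model shows the per-group pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Hypothetical stand-ins: per-subject volumes of segmented atlas regions
# plus age in years (sizes are illustrative only).
n_subjects, n_regions = 200, 50
region_volumes = rng.normal(size=(n_subjects, n_regions))
age = rng.uniform(9.0, 10.0, size=(n_subjects, 1))
X = np.hstack([region_volumes, age])
gf = rng.normal(size=n_subjects)     # synthetic fluid-intelligence scores

# Fit on a training split, score by MSE on a held-out split, as in the paper.
train, val = slice(0, 150), slice(150, None)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[train], gf[train])
mse = mean_squared_error(gf[val], model.predict(X[val]))
print(mse)
```

In the actual challenge, one such model would be fit per sex and evaluated on the external validation subjects.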
