The machinery of the human brain -- analog, probabilistic, embodied -- can be characterized computationally, but what machinery confers what computational powers? Any such system can be abstractly cast in terms of two computational components: a finite state machine carrying out computational steps, whether via currents, chemistry, or mechanics; plus a set of allowable memory operations, typically formulated in terms of an information store that can be read from and written to, whether via synaptic change, state transition, or recurrent activity. By probing these mechanisms for their information content, we can capture the differences in computational power that various systems attain. Most human cognitive abilities, from perception to action to memory, are shared with other species; we seek to characterize those (few) capabilities that are ubiquitously present among humans and absent from other species. Three realms of formidable constraints -- a) measurable human cognitive abilities, b) measurable allometric anatomical brain characteristics, and c) measurable features of specific automata and formal grammars -- illustrate remarkably sharp restrictions on human abilities, unexpectedly confining human cognition to a specific class of automata (nested stack automata), which is markedly less powerful than Turing machines.
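To make the "finite controller plus allowable memory operations" framing concrete, here is a minimal sketch; it illustrates the general idea only and is not code or notation from the paper, and the function names and toy languages are my own. The same finite controller is paired with different memory operations, and the operations permitted on the memory store determine what the system can compute.

def fsm_accepts_even_as(s):
    """Finite-state controller, no memory store: can only count modulo a constant."""
    state = 0
    for ch in s:
        if ch == 'a':
            state = 1 - state
    return state == 0

def pda_accepts_anbn(s):
    """The same kind of controller plus one stack: can now recognize a^n b^n."""
    stack = []
    phase = 'a'
    for ch in s:
        if ch == 'a':
            if phase != 'a':
                return False
            stack.append(ch)          # write to the memory store
        elif ch == 'b':
            phase = 'b'
            if not stack:
                return False
            stack.pop()               # read/erase from the memory store
        else:
            return False
    return not stack

print(fsm_accepts_even_as("aaaa"))    # True: parity lies within finite-state power
print(pda_accepts_anbn("aaabbb"))     # True: unbounded counting requires the stack
print(pda_accepts_anbn("aaabb"))      # False

Nested stack automata extend this hierarchy further (stacks within stacks) while still falling short of full Turing-machine power.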
Multimodal fusion benefits disease diagnosis by providing a more comprehensive perspective, but developing fusion algorithms is challenging because of data heterogeneity and the complex within- and between-modality associations. Deep-network-based data-fusion models have been developed to capture these complex associations, and diagnostic performance has improved accordingly. Moving beyond diagnosis prediction, evaluating disease mechanisms is critically important for biomedical research. Deep-network-based data-fusion models, however, are difficult to interpret, which hinders the study of biological mechanisms. In this work, we develop an interpretable multimodal fusion model, gCAM-CCL, which performs automated diagnosis and result interpretation simultaneously. The gCAM-CCL model generates interpretable activation maps that quantify pixel-level contributions of the input features; this is achieved by combining intermediate feature maps using gradient-based weights. Moreover, the estimated activation maps are class-specific, and the captured cross-data associations are related to the labels of interest, which further facilitates class-specific analysis and the analysis of biological mechanisms. We validate the gCAM-CCL model on a brain imaging-genetics study and show that gCAM-CCL performs well in both classification and mechanism analysis. Mechanism analysis suggests that during task-fMRI scans, several object-recognition-related regions of interest (ROIs) are activated first, and several downstream encoding ROIs then become involved. Results also suggest that the higher-cognition-performing group may have stronger neurotransmission signaling, whereas the lower-cognition-performing group may have problems in brain/neuron development resulting from genetic variations.
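As an illustration of the generic gradient-weighted activation-map idea described above (intermediate feature maps combined using weights derived from class-score gradients), here is a minimal sketch; the function name, array shapes, and normalization are my own assumptions and are not taken from the gCAM-CCL implementation.

import numpy as np

def gradient_weighted_activation_map(feature_maps, gradients):
    """
    feature_maps: array of shape (C, H, W), intermediate feature maps.
    gradients:    array of shape (C, H, W), d(class score)/d(feature map).
    Returns an (H, W) class-specific activation map.
    """
    # One weight per channel: global-average-pool the gradients.
    weights = gradients.mean(axis=(1, 2))                  # shape (C,)
    # Weighted sum of feature maps, then ReLU to keep positive evidence.
    cam = np.tensordot(weights, feature_maps, axes=1)      # shape (H, W)
    cam = np.maximum(cam, 0)
    # Normalize to [0, 1] so pixel-level contributions are comparable.
    return cam / (cam.max() + 1e-8)

# Toy usage with random arrays standing in for real network activations.
rng = np.random.default_rng(0)
fmap, grad = rng.normal(size=(8, 16, 16)), rng.normal(size=(8, 16, 16))
print(gradient_weighted_activation_map(fmap, grad).shape)  # (16, 16)

Because the gradients are taken with respect to a particular class score, the resulting map is class-specific, which is what enables the class-level mechanism analysis described in the abstract.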
In 2006, during a meeting of a working group of scientists at The Neurosciences Institute (NSI) in La Jolla, California, Gerald Edelman described a roadmap towards the creation of a Conscious Artifact. As far as I know, this roadmap was not published. However, it shaped my thinking, and that of many others, in the years since that meeting. This short paper, based on my notes taken during the meeting, describes the key steps in the roadmap. I believe it is as groundbreaking today as it was more than 15 years ago.
We present the results of two tests in which a sample of human participants was asked to make judgements about the conceptual combinations `The Animal Acts' and `The Animal eats the Food'. Both tests significantly violate the Clauser-Horne-Shimony-Holt version of Bell's inequalities (the `CHSH inequality'), thus exhibiting manifestly non-classical behaviour due to the meaning connection between the individual concepts that are combined. We then apply a quantum-theoretic framework, which we developed for any Bell-type situation, and represent the empirical data in complex Hilbert space. We show that the observed violations of the CHSH inequality can be explained as a consequence of a strong form of `quantum entanglement' between the component conceptual entities, in which both the state and the measurements are entangled. We finally observe that a quantum model in Hilbert space can be elaborated in these Bell-type situations even when the CHSH violation exceeds the known `Cirelson bound', in contrast to a widespread belief. These findings confirm and strengthen the results we recently obtained in a variety of cognitive tests and in document and image retrieval operations on the same conceptual combinations.
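For reference, the CHSH quantity referred to above is the standard one; this display is added here for clarity and uses the usual notation for the four joint measurements, not necessarily the paper's own symbols:
\[
S = E(A,B) + E(A,B') + E(A',B) - E(A',B'), \qquad
|S| \le 2 \ \text{(classical CHSH bound)}, \qquad
|S| \le 2\sqrt{2} \ \text{(Cirelson bound)}.
\]
A violation of $|S| \le 2$ signals non-classical correlations, and values beyond $2\sqrt{2}$ are what the abstract means by exceeding the Cirelson bound.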
Mathematical approaches to modeling the mind since the 1950s are reviewed. The difficulties faced by these approaches are related to the fundamental incompleteness of logic discovered by K. Gödel. A recent mathematical advancement, dynamic logic (DL), overcame these past difficulties. DL is described conceptually and related to neuroscience, psychology, cognitive science, and philosophy. DL models higher cognitive functions: concepts, emotions, instincts, understanding, imagination, intuition, and consciousness. DL is related to the knowledge instinct that drives our understanding of the world and serves as a foundation for higher cognitive functions. Aesthetic emotions and the perception of beauty are related to the everyday functioning of the mind. The article reviews mechanisms of human symbolic ability, language and cognition, and the joint evolution of the mind, consciousness, and cultures. It touches on the manifold of aesthetic emotions in music, including their cognitive function, origin, and evolution. The article concentrates on elucidating first principles and reviews aspects of the theory proven in laboratory research.
The social brain hypothesis postulates the increasing complexity of social interactions as a driving force for the evolution of cognitive abilities. Whereas dyadic and triadic relations play a basic role in defining social behaviours and pose many challenges for the social brain, individuals in animal societies typically belong to relatively large networks. How the structure and dynamics of these networks also contribute to the evolution of cognition, and vice versa, is less understood. Here we review how collective phenomena can occur in systems where social agents do not require sophisticated cognitive skills, and how complex networks can grow from simple probabilistic rules, or even emerge from the interaction between agents and their environment, without explicit social factors. We further show that the analysis of social networks can be used to develop good indicators of social complexity beyond the individual or dyadic level. We also discuss the types of challenges that the social brain must cope with in structured groups, such as higher information fluxes originating from individuals playing different roles in the network, or dyadic contacts of widely varying durations and frequencies. Finally, we discuss the relevance of these ideas for primates and other animal societies.
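As a toy illustration of the claim that complex networks can grow from simple probabilistic rules, here is a minimal sketch of preferential attachment (a standard growth rule, chosen by me as an example and not taken from the review): each newcomer attaches to existing individuals with probability proportional to how many contacts they already have, and heavily connected hubs emerge without any agent using sophisticated cognition.

import random
from collections import Counter

def grow_network(n_nodes, m_links, seed=0):
    random.seed(seed)
    # Start from a small fully connected core of m_links + 1 individuals.
    edges = [(i, j) for i in range(m_links + 1) for j in range(i)]
    # 'ends' lists every edge endpoint, so sampling from it is proportional to degree.
    ends = [v for e in edges for v in e]
    for new in range(m_links + 1, n_nodes):
        targets = set()
        while len(targets) < m_links:
            targets.add(random.choice(ends))   # degree-proportional choice
        for t in targets:
            edges.append((new, t))
            ends.extend((new, t))
    return edges

edges = grow_network(n_nodes=1000, m_links=2)
degrees = Counter(v for e in edges for v in e)
print("max degree:", max(degrees.values()),
      "mean degree:", sum(degrees.values()) / len(degrees))
# A few highly connected hubs emerge from a purely probabilistic rule.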