We evaluated the cognitive status of visually impaired patients referred to low vision rehabilitation (LVR) based on a standard cognitive battery and a new evaluation tool, named the COGEVIS, which can be used to assess patients with severe visual deficits. We studied patients aged 60 and above referred to the LVR Hospital in Paris. Neurological and cognitive evaluations were performed in an expert memory center. Thirty-eight individuals, 17 women and 21 men with a mean age of 70.3 $\pm$ 1.3 years and a mean visual acuity of 0.12 $\pm$ 0.02, were recruited over a one-year period. Sixty-three percent of participants had normal cognitive status. Cognitive impairment was diagnosed in 37.5% of participants. The COGEVIS score cutoff point to screen for cognitive impairment was 24 (maximum score of 30), with a sensitivity of 66.7% and a specificity of 95%. Evaluation following 4 months of visual rehabilitation showed an improvement in Instrumental Activities of Daily Living (p = 0.004), National Eye Institute Visual Functioning Questionnaire (p = 0.035), and Montgomery-Åsberg Depression Rating Scale (p = 0.037) scores. This study introduces a new short test to screen for cognitive impairment in visually impaired patients.
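For readers unfamiliar with screening statistics, the reported sensitivity and specificity follow directly from the confusion counts at a given cutoff. A minimal sketch, assuming scores at or below the cutoff flag possible impairment; the patient data below are hypothetical, not the study's:

```python
# Illustrative sketch: sensitivity/specificity of a score cutoff.
# Assumption: COGEVIS scores at or below the cutoff flag possible impairment.

CUTOFF = 24  # out of a maximum score of 30, as reported in the abstract

def screen_stats(scores, impaired, cutoff=CUTOFF):
    """Return (sensitivity, specificity) of the rule `score <= cutoff`."""
    tp = sum(s <= cutoff and imp for s, imp in zip(scores, impaired))
    fn = sum(s > cutoff and imp for s, imp in zip(scores, impaired))
    tn = sum(s > cutoff and not imp for s, imp in zip(scores, impaired))
    fp = sum(s <= cutoff and not imp for s, imp in zip(scores, impaired))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical patients (score, impaired on the reference battery):
scores = [20, 23, 27, 29, 25, 18]
impaired = [True, True, False, False, True, True]
sens, spec = screen_stats(scores, impaired)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```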
The machinery of the human brain -- analog, probabilistic, embodied -- can be characterized computationally, but what machinery confers what computational powers? Any such system can be abstractly cast in terms of two computational components: a finite state machine carrying out computational steps, whether via currents, chemistry, or mechanics; plus a set of allowable memory operations, typically formulated in terms of an information store that can be read from and written to, whether via synaptic change, state transition, or recurrent activity. By probing these mechanisms for their information content, we can capture the differences in computational power across such systems. Most human cognitive abilities, from perception to action to memory, are shared with other species; we seek to characterize those (few) capabilities that are ubiquitously present among humans and absent from other species. Three realms of formidable constraints -- a) measurable human cognitive abilities, b) measurable allometric anatomic brain characteristics, and c) measurable features of specific automata and formal grammars -- illustrate remarkably sharp restrictions on human abilities, unexpectedly confining human cognition to a specific class of automata (nested stack) that sits markedly below Turing machines.
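To make the two-component decomposition concrete, here is a minimal sketch, using our own example rather than the paper's formalism: a pushdown automaton pairing a finite controller with a single stack as the memory store. Enlarging the allowed memory operations (e.g., to nested stacks) strictly increases power, up to Turing machines:

```python
# Illustrative sketch: a pushdown automaton recognizing balanced parentheses.
# The finite controller is the transition logic; the memory store is a stack
# restricted to push/pop operations.

def accepts_balanced(s: str) -> bool:
    stack = []                  # the memory component: a single stack
    state = "run"               # the finite-state component (trivial here)
    for ch in s:
        if ch == "(":
            stack.append(ch)    # allowed memory op: push
        elif ch == ")":
            if stack:
                stack.pop()     # allowed memory op: pop
            else:
                state = "reject"
        else:
            state = "reject"    # input outside the alphabet
    return state == "run" and not stack

assert accepts_balanced("(()())")
assert not accepts_balanced("(()")
assert not accepts_balanced(")(")
```

No finite state machine alone can recognize this language; the single stack is exactly the added memory that confers the extra power.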
We present the results of two tests where a sample of human participants were asked to make judgements about the conceptual combinations {\it The Animal Acts} and {\it The Animal Eats the Food}. Both tests significantly violate the Clauser-Horne-Shimony-Holt version of Bell inequalities (the `CHSH inequality'), thus exhibiting manifestly non-classical behaviour due to the meaning connection between the individual concepts that are combined. We then apply a quantum-theoretic framework, which we developed for any Bell-type situation, and represent the empirical data in complex Hilbert space. We show that the observed violations of the CHSH inequality can be explained as a consequence of a strong form of `quantum entanglement' between the component conceptual entities, in which both the state and the measurements are entangled. We finally observe that, in contrast to a widespread belief, a quantum model in Hilbert space can be elaborated in these Bell-type situations even when the CHSH violation exceeds the known `Cirel'son bound'. These findings confirm and strengthen the results we recently obtained in a variety of cognitive tests and in document and image retrieval operations on the same conceptual combinations.
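For reference, the CHSH quantity combines four joint expectation values; local classical models obey |S| <= 2, while quantum models are limited by the Cirel'son (Tsirelson) bound of 2*sqrt(2). A minimal sketch of the standard formula; the correlation values below are hypothetical, not the paper's data:

```python
import math

def chsh(E_AB, E_ABp, E_ApB, E_ApBp):
    """CHSH quantity S = E(A,B) - E(A,B') + E(A',B) + E(A',B')."""
    return E_AB - E_ABp + E_ApB + E_ApBp

CLASSICAL_BOUND = 2.0
CIRELSON_BOUND = 2 * math.sqrt(2)  # ~2.83, the quantum limit

# Hypothetical correlations, each in [-1, 1]:
S = chsh(0.7, -0.7, 0.7, 0.7)
print(f"S = {S:.2f}, classical bound = {CLASSICAL_BOUND}, "
      f"Cirel'son bound = {CIRELSON_BOUND:.2f}")
```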
The social brain hypothesis postulates the increasing complexity of social interactions as a driving force for the evolution of cognitive abilities. Whereas dyadic and triadic relations play a basic role in defining social behaviours and pose many challenges for the social brain, individuals in animal societies typically belong to relatively large networks. How the structure and dynamics of these networks also contribute to the evolution of cognition, and vice versa, is less well understood. Here we review how collective phenomena can occur in systems where social agents do not require sophisticated cognitive skills, and how complex networks can grow from simple probabilistic rules, or even emerge from the interaction between agents and their environment, without explicit social factors. We further show that the analysis of social networks can be used to develop good indicators of social complexity beyond the individual or dyadic level. We also discuss the types of challenges that the social brain must cope with in structured groups, such as higher information fluxes originating from individuals who play different roles in the network, or dyadic contacts of widely varying durations and frequencies. We discuss the relevance of these ideas for primates and other animal societies.
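As one illustration of how complex networks can grow from simple probabilistic rules, here is a minimal preferential-attachment sketch; this is a standard growth model chosen by us as an example, not a method from the review. Each new node links to existing nodes with probability proportional to their degree, yielding heavy-tailed degree distributions without any explicit social reasoning:

```python
import random

def preferential_attachment(n_nodes: int, m: int = 2, seed: int = 0):
    """Grow a graph where each new node attaches to m existing nodes,
    chosen with probability proportional to their current degree."""
    rng = random.Random(seed)
    edges = [(0, 1)]           # seed graph: a single edge
    endpoints = [0, 1]         # node appearances, weighted by degree
    for new in range(2, n_nodes):
        chosen = set()
        while len(chosen) < min(m, new):
            chosen.add(rng.choice(endpoints))  # degree-biased choice
        for old in chosen:
            edges.append((new, old))
            endpoints += [new, old]            # update degree weights
    return edges

edges = preferential_attachment(200)
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
print("max degree:", max(degree.values()))  # hubs emerge from the simple rule
```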
Multimodal fusion benefits disease diagnosis by providing a more comprehensive perspective. Developing fusion algorithms is challenging due to data heterogeneity and the complex within- and between-modality associations. Deep-network-based data-fusion models have been developed to capture these complex associations, improving diagnostic performance accordingly. Moving beyond diagnosis prediction, the evaluation of disease mechanisms is critically important for biomedical research; deep-network-based data-fusion models, however, are difficult to interpret, which hampers the study of biological mechanisms. In this work, we develop an interpretable multimodal fusion model, named gCAM-CCL, which performs automated diagnosis and result interpretation simultaneously. The gCAM-CCL model generates interpretable activation maps that quantify pixel-level contributions of the input features; this is achieved by combining intermediate feature maps using gradient-based weights. Moreover, the estimated activation maps are class-specific, and the captured cross-data associations are interest- or label-related, which further facilitates class-specific and biological-mechanism analyses. We validate the gCAM-CCL model on a brain imaging-genetic study and show that gCAM-CCL performs well in both classification and mechanism analysis. Mechanism analysis suggests that during task-fMRI scans, several object-recognition-related regions of interest (ROIs) are activated first, and several downstream encoding ROIs then become involved. Results also suggest that the higher-cognition group may have stronger neurotransmission signaling, while the lower-cognition group may have problems in brain/neuron development arising from genetic variations.
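The mechanism described, combining intermediate feature maps using gradient-based weights, follows the Grad-CAM family. A minimal generic Grad-CAM sketch in PyTorch, not the authors' gCAM-CCL implementation; the toy network and shapes are our assumptions:

```python
import torch
import torch.nn.functional as F

def grad_cam(feature_maps: torch.Tensor, class_score: torch.Tensor) -> torch.Tensor:
    """Generic Grad-CAM: weight intermediate feature maps by the gradients of
    a class score, then combine them into a class-specific activation map.
    feature_maps: (1, C, H, W) activations on the autograd graph;
    class_score: scalar logit for the class of interest."""
    grads, = torch.autograd.grad(class_score, feature_maps)
    weights = grads.mean(dim=(2, 3), keepdim=True)     # pooled gradients per channel
    cam = F.relu((weights * feature_maps).sum(dim=1))  # weighted channel sum
    return cam / (cam.max() + 1e-8)                    # (1, H, W), scaled to [0, 1]

# Hypothetical usage with a toy conv net (illustrative shapes only):
conv = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1)
head = torch.nn.Linear(8, 2)
x = torch.randn(1, 3, 16, 16)
fmap = conv(x)                        # intermediate feature maps
logits = head(fmap.mean(dim=(2, 3)))  # global average pool, then classify
cam = grad_cam(fmap, logits[0, 1])    # pixel-level contribution map for class 1
```

Because the weights are gradients of a single class score, the resulting map is class-specific, which is what enables the kind of per-group mechanism analysis the abstract describes.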
Human Augmentation (HA) spans several technical fields and methodological approaches, including Experimental Psychology, Human-Computer Interaction, Psychophysiology, and Artificial Intelligence. Augmentation involves various strategies for optimizing and controlling cognitive states, which requires an understanding of biological plasticity, dynamic cognitive processes, and models of adaptive systems. As an instructive lesson, we will explore a few HA-related concepts and outstanding issues. Next, we focus on inducing and controlling HA using experimental methods by introducing three techniques for HA implementation: learning augmentation, augmentation using physical media, and extended phenotype modeling. To conclude, we will review integrative approaches to augmentation, which transcend specific functions.