Autism Spectrum Disorder (ASD) is a developmental disorder that severely impairs communicative and social functions and also involves mental rigidity, repetitive behavior, and difficulty with abstract reasoning. Moreover, imbalances between excitatory and inhibitory brain states, together with disruptions of cortical connectivity, are thought to underlie autistic behavior. Our main goal is to unveil how these local excitatory imbalances and/or disruptions of long-range brain connections are linked to the cognitive features mentioned above. We developed a theoretical model based on Self-Organizing Maps (SOM), in which a three-level artificial neural network qualitatively incorporates the kinds of alterations observed in the brains of patients with ASD. Computational simulations of our model indicate that elevated excitatory states or long-distance under-connectivity lie at the origin of cognitive alterations such as difficulty in categorization and mental rigidity. More specifically, the enlargement of excitatory synaptic reach areas during the development of a cortical map leads to poor categorization (over-selectivity) and poor concept formation. Both the over-strengthening of local excitatory synapses and long-distance under-connectivity, although through distinct mechanisms, contribute to impaired categorization (under-selectivity) and mental rigidity. Our results indicate how local and global brain connectivity alterations together give rise to degraded cortical structures in distinct ways and in distinct cortical areas. These alterations would disrupt the encoding of sensory stimuli, the representation of concepts and, thus, the process of categorization, thereby imposing serious limits on mental flexibility and on the capacity for generalization in autistic reasoning.
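The abstract does not specify the SOM model in detail, but the claimed effect of enlarged excitatory synaptic reach can be illustrated with a minimal self-organizing map in which the neighborhood radius `sigma` stands in for the reach of local excitation. All parameter values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def train_som(data, grid_size=8, sigma=1.0, lr=0.5, epochs=20, seed=0):
    """Train a 2D SOM; `sigma` is the excitatory neighborhood radius
    (a larger sigma mimics over-reaching excitatory synapses)."""
    rng = np.random.default_rng(seed)
    # Grid coordinates and randomly initialized prototype vectors.
    coords = np.array([(i, j) for i in range(grid_size) for j in range(grid_size)])
    weights = rng.random((grid_size * grid_size, data.shape[1]))
    for _ in range(epochs):
        for x in rng.permutation(data):
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))  # excitatory neighborhood function
            weights += lr * h[:, None] * (x - weights)
    return weights

# Two well-separated clusters of sensory stimuli.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.2, 0.05, (50, 2)),
                  rng.normal(0.8, 0.05, (50, 2))])

w_normal = train_som(data, sigma=1.0)
w_broad = train_som(data, sigma=6.0)  # enlarged excitatory reach

# Spread of the learned prototypes: an over-broad neighborhood drags
# all units together, collapsing the map toward a single
# undifferentiated representation (poor categorization).
print(w_normal.std(), w_broad.std())
```

With the broad neighborhood, every stimulus pulls essentially the whole map, so the prototype vectors fail to differentiate into distinct categories, which is the qualitative mechanism the abstract attributes to over-selectivity.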
Artificial neural networks have diverged far from their early inspiration in neurobiology. In spite of their technological and commercial success, they have several shortcomings, most notably the need for a large number of training examples and the computational resources required for iterative learning. Here we describe an approach to neural network simulation, both architectural and algorithmic, that adheres more closely to established biological principles and overcomes some of the shortcomings of conventional networks.
In this paper we introduce a novel Salience-Affected Artificial Neural Network (SANN) that models the way neuromodulators such as dopamine and noradrenaline affect neural dynamics in the human brain: by diffusing through neocortical regions, they allow salience signals to modulate cognition immediately and enable one-time learning by strengthening entire patterns of activation at once. We present a model capable of one-time salience tagging in a neural network trained to classify objects, which returns a salience response during classification (inference). We explore the effects of salience on learning via its effect on the activation function of each node, as well as on the strength of the weights between nodes in the network. We demonstrate that salience tagging can improve classification confidence both for the individual image and for the class of images it belongs to. We also show that the computational cost of producing a salience response is minimal. This research serves as a proof of concept and could be a first step toward introducing salience tagging into deep learning networks and robotics.
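The SANN architecture itself is not given in the abstract; the core idea of one-shot, pattern-wide strengthening can be sketched by treating salience as a multiplicative gain on a single Hebbian-style update applied to a whole activation pattern. Every name and parameter here is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

def forward(w, x):
    """One-layer network with sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-(w @ x)))

def salience_tag(w, x, salience, eta=0.1):
    """One-shot, pattern-wide update: every weight carrying the current
    activation pattern is strengthened in proportion to the salience
    signal (a crude stand-in for diffuse neuromodulator release)."""
    y = forward(w, x)
    return w + eta * salience * np.outer(y, x)  # Hebbian term scaled by salience

rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, (4, 8))
pattern = rng.random(8)  # one salient input pattern

before = forward(w, pattern)
w_tagged = salience_tag(w, pattern, salience=5.0)  # single high-salience event
after = forward(w_tagged, pattern)

# The tagged pattern now evokes a stronger response ("salience response").
print(after.mean() - before.mean())
```

The point of the sketch is that a single tagging event, rather than iterative training, is enough to bias the network's subsequent response to the tagged pattern.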
There are several indications that the brain is organized not on the basis of individual, unreliable neurons, but at a micro-circuit scale that provides Lego-like blocks for building complex architectures. At this intermediate scale, the firing activity of the microcircuits is governed by collective effects emerging from the background noise that elicits spontaneous firing, the degree of mutual connection between the neurons, and the topology of the connections. We compare the spontaneous firing activity of small populations of neurons adhering to an engineered scaffold with simulations of biologically plausible CMOS artificial neuron populations whose spontaneous activity is ignited by tailored background noise. We provide a full set of flexible, low-power silicon blocks, including neurons, excitatory and inhibitory synapses, and both white- and pink-noise generators for the activation of spontaneous firing. We achieve a comparable degree of correlation in the firing activity of the biological neurons by controlling the kind and number of connections among the silicon neurons. The correlation between groups of neurons, organized as a ring of four distinct populations connected by the equivalent of interneurons, is triggered more effectively by adding multiple synapses to the connections than by increasing the number of independent point-to-point connections. The comparison between the biological and artificial systems suggests that a considerable number of synapses is also active in biological populations adhering to engineered scaffolds.
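The silicon neuron blocks are not specified in the abstract; the mechanism of spontaneous firing ignited by background noise can be illustrated with a simple leaky integrate-and-fire neuron driven only by white noise. The time constants and thresholds below are arbitrary illustrative choices:

```python
import numpy as np

def lif_spontaneous(noise_std, steps=5000, seed=0,
                    tau=20.0, v_rest=0.0, v_thresh=1.0, dt=1.0):
    """Leaky integrate-and-fire neuron driven only by white background
    noise; returns the number of spontaneous spikes."""
    rng = np.random.default_rng(seed)
    v, spikes = v_rest, 0
    for _ in range(steps):
        noise = rng.normal(0.0, noise_std)
        v += dt * (-(v - v_rest) / tau) + noise  # leak plus noise drive
        if v >= v_thresh:
            spikes += 1
            v = v_rest  # reset after spike
    return spikes

# Stronger background noise ignites more spontaneous activity.
quiet = lif_spontaneous(noise_std=0.02)
noisy = lif_spontaneous(noise_std=0.2)
print(quiet, noisy)
```

This mirrors, in miniature, the role the abstract assigns to the white- and pink-noise generators: spontaneous activity is not intrinsic to the neuron model but is switched on and shaped by the tailored noise input.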
Neuroscientists are actively pursuing high-precision maps, or graphs, consisting of networks of neurons and connecting synapses in mammalian and non-mammalian brains. Such graphs, when coupled with physiological and behavioral data, are likely to facilitate greater understanding of how circuits in these networks give rise to complex information processing capabilities. Given that the automated or semi-automated methods required to achieve the acquisition of these graphs are still evolving, we develop a metric for measuring the performance of such methods by comparing their output with that generated by human annotators (ground truth data). Whereas classic metrics for comparing annotated neural tissue reconstructions generally do so at the voxel level, the metric proposed here measures the integrity of neurons based on the degree to which a collection of synaptic terminals belonging to a single neuron of the reconstruction can be matched to those of a single neuron in the ground truth data. The metric is largely insensitive to small errors in segmentation and more directly measures the accuracy of the generated brain graph. It is our hope that use of the metric will facilitate the broader community's efforts to improve upon existing methods for acquiring brain graphs. Herein we describe the metric in detail, provide demonstrative examples of the intuitive scores it generates, and apply it to a synthesized neural network with simulated reconstruction errors.
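The exact formulation of the metric is only described at a high level here; one plausible sketch, assuming each reconstructed neuron is matched to the ground-truth neuron that owns the plurality of its synaptic terminals and the score is the matched fraction of terminals, is the following (the data layout and scoring rule are my assumptions, not the paper's definition):

```python
from collections import Counter

def neuron_integrity(reconstruction, ground_truth):
    """Score a reconstruction by synaptic-terminal agreement.
    Both arguments map terminal id -> neuron id. Each reconstructed
    neuron is matched to the ground-truth neuron that owns most of its
    terminals; the score is the matched fraction over all terminals."""
    by_neuron = {}
    for term, neuron in reconstruction.items():
        by_neuron.setdefault(neuron, []).append(term)
    matched = 0
    for terms in by_neuron.values():
        # Count which ground-truth neuron each terminal truly belongs to.
        votes = Counter(ground_truth[t] for t in terms)
        matched += votes.most_common(1)[0][1]  # plurality match
    return matched / len(ground_truth)

truth = {t: (0 if t < 5 else 1) for t in range(10)}  # two true neurons
perfect = dict(truth)
merged = {t: 0 for t in range(10)}  # merge error: both neurons fused into one

print(neuron_integrity(perfect, truth))  # 1.0
print(neuron_integrity(merged, truth))   # 0.5
```

Note how a merge error cuts the score sharply even though every terminal is detected, which captures the abstract's emphasis on neuron integrity over voxel-level agreement.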
We investigate Turing's notion of an A-type artificial neural network. We study a refinement of Turing's original idea, motivated by the work of Teuscher, Bull, Preen, and Copeland. Our A-types can process binary data by accepting and outputting sequences of binary vectors; hence we can associate a function with an A-type, and we say the A-type "represents" the function. There are two modes of data processing: clamped and sequential. We describe an evolutionary algorithm, involving graph-theoretic manipulations of A-types, that searches for A-types representing a given function. The algorithm uses both mutation and crossover operators. We implemented the algorithm and applied it to three benchmark tasks. We found that the algorithm performed much better than random search. For two of the three tasks, the version with crossover performed better than a mutation-only version.
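The paper's graph-theoretic operators on A-types are not reproduced here; the overall mutation-plus-crossover search loop can be sketched generically, using bit strings as a stand-in for encoded A-type networks and agreement with a target bit string as a stand-in for "representing the given function". All parameters are illustrative:

```python
import random

def evolve(target, genome_len, pop_size=30, generations=200, seed=0):
    """Elitist evolutionary search with one-point crossover and point
    mutation; genomes are bit strings standing in for encoded networks,
    and fitness is agreement with the target bit string."""
    rng = random.Random(seed)
    fitness = lambda g: sum(a == b for a, b in zip(g, target))
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == genome_len:
            break  # a genome representing the target has been found
        parents = pop[: pop_size // 2]          # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)  # one-point crossover
            child = p1[:cut] + p2[cut:]
            i = rng.randrange(genome_len)       # point mutation
            child[i] ^= 1
            children.append(child)
        pop = parents + children
    best = max(pop, key=fitness)
    return best, fitness(best)

target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
best, score = evolve(target, len(target))
print(score, len(target))
```

A mutation-only variant would simply skip the crossover step and copy a single parent before mutating; comparing the two against a pure random search is the kind of experiment the abstract reports.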