This thesis is designed to be a self-contained exposition of the neurobiological and mathematical aspects of sensory perception, memory, and learning, with a bias towards olfaction. The final chapters introduce a new approach to modeling that focuses on the geometry of the system rather than on element-wise dynamics. Additionally, we construct an organism-independent model for olfactory processing, something which is currently missing from the literature.
The receptive fields of simple cells in the visual cortex can be understood as linear filters. These filters can be modelled by Gabor functions, or by Gaussian derivatives. Gabor functions can also be combined in an 'energy' model of the complex cell response. This paper proposes an alternative model of the complex cell, based on Gaussian derivatives. The most important property to account for is the insensitivity of the complex response to small shifts of the image. The new model uses a linear combination of the first few derivative filters, at a single position, to approximate the first derivative filter at a series of adjacent positions. The maximum response, over all positions, gives a signal that is insensitive to small shifts of the image. This model, unlike previous approaches, is based on the scale-space theory of visual processing. In particular, the complex cell is built from filters that respond to the 2-D differential structure of the image. The computational aspects of the new model are studied in one and two dimensions, using the steerability of the Gaussian derivatives. The response of the model to basic images, such as edges and gratings, is derived formally. The response to natural images is also evaluated, using statistical measures of shift insensitivity. The relevance of the new model to the cortical image representation is discussed.
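To make the Taylor-series construction concrete, here is a minimal 1-D sketch of such a shift-insensitive response: higher-order Gaussian derivatives taken at a single position approximate the shifted first-derivative filter, and the maximum rectified response over shifts is returned. All function names and parameter values are illustrative assumptions, not the paper's implementation:

```python
import math
import numpy as np

def gaussian_derivatives(x, sigma, max_order):
    # Gaussian and its derivatives up to max_order, sampled on grid x.
    g = np.exp(-x**2 / (2 * sigma**2)) / (math.sqrt(2 * math.pi) * sigma)
    derivs = [g]
    for _ in range(max_order):
        derivs.append(np.gradient(derivs[-1], x))  # numerical derivative
    return derivs

def complex_response(signal, x, sigma=2.0, n_terms=4,
                     shifts=np.linspace(-2.0, 2.0, 9)):
    # Taylor-approximate the shifted first-derivative filter g'(x - d)
    # from derivatives at a single position:
    #   g'(x - d) ~ sum_k (-d)^k / k! * g^(k+1)(x)
    # then take the maximum rectified response over all shifts d.
    derivs = gaussian_derivatives(x, sigma, n_terms + 1)
    best = 0.0
    for d in shifts:
        filt = sum(((-d) ** k / math.factorial(k)) * derivs[k + 1]
                   for k in range(n_terms))
        best = max(best, abs(float(np.dot(signal, filt))))
    return best

# Usage: the response to a step edge is stable under small shifts.
x = np.linspace(-10.0, 10.0, 201)
edge = (x > 0).astype(float)
print(complex_response(edge, x))
```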
This paper proposes a novel topological learning framework that can integrate brain networks of different sizes and topology through persistent homology. This is made possible by the introduction of a new topological loss function that enables this challenging task. The proposed loss function bypasses the intrinsic computational bottleneck associated with matching networks. We validate the method in extensive statistical simulations with ground truth to assess the effectiveness of the topological loss in discriminating networks with different topology. The method is further applied to a twin brain imaging study to determine whether the brain network is genetically heritable. The challenge lies in overlaying the topologically different functional brain networks obtained from resting-state functional MRI (fMRI) onto the template structural brain network obtained through diffusion MRI (dMRI).
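A hedged sketch of what such a matching-free topological loss could look like is given below. It uses the graph-filtration fact that a maximum spanning tree splits the edge weights into 0-D (component) births and 1-D (cycle) deaths, and compares the sorted values by interpolation so networks of different sizes remain comparable. The function names and the size-matching scheme are assumptions, not the authors' code:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def persistence_sets(adj):
    # Graph filtration: 0-D births come from the maximum spanning tree
    # (a minimum spanning tree on negated weights); the remaining edge
    # weights give the 1-D deaths (independent cycles).
    w = np.triu(adj, k=1)
    mst = -minimum_spanning_tree(-w).toarray()
    births = np.sort(mst[mst > 0])
    rest = w.copy()
    rest[mst > 0] = 0.0
    deaths = np.sort(rest[rest > 0])
    return births, deaths

def topological_loss(adj_a, adj_b):
    # Match sorted persistence values by interpolation onto a common
    # grid, so no explicit node or edge correspondence is needed.
    loss = 0.0
    for pa, pb in zip(persistence_sets(adj_a), persistence_sets(adj_b)):
        if len(pa) == 0 or len(pb) == 0:
            continue
        n = max(len(pa), len(pb))
        grid = np.linspace(0.0, 1.0, n)
        qa = np.interp(grid, np.linspace(0.0, 1.0, len(pa)), pa)
        qb = np.interp(grid, np.linspace(0.0, 1.0, len(pb)), pb)
        loss += float(np.sum((qa - qb) ** 2))
    return loss
```

Because both steps reduce to sorting and interpolation, the loss avoids the combinatorial matching that is the usual bottleneck when comparing networks.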
Neural population activity is theorized to reflect an underlying dynamical structure. This structure can be accurately captured using state space models with explicit dynamics, such as those based on recurrent neural networks (RNNs). However, using recurrence to explicitly model dynamics necessitates sequential processing of data, slowing real-time applications such as brain-computer interfaces. Here we introduce the Neural Data Transformer (NDT), a non-recurrent alternative. We test the NDT's ability to capture autonomous dynamical systems by applying it to synthetic datasets with known dynamics and to data from monkey motor cortex during a reaching task well modeled by RNNs. The NDT models these datasets as well as state-of-the-art recurrent models. Further, its non-recurrence enables 3.9 ms inference, well within the loop time of real-time applications and more than 6 times faster than recurrent baselines on the monkey reaching dataset. These results suggest that an explicit dynamics model is not necessary to model autonomous neural population dynamics. Code: https://github.com/snel-repo/neural-data-transformers
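The following minimal PyTorch sketch conveys the architectural idea of a non-recurrent encoder over binned spike counts, trained with a Poisson likelihood. It is an assumed simplification, not the released NDT code (see the repository linked above for the actual implementation):

```python
import torch
import torch.nn as nn

class TinyNDT(nn.Module):
    def __init__(self, n_neurons, d_model=64, n_layers=2, n_heads=4,
                 max_len=200):
        super().__init__()
        self.embed = nn.Linear(n_neurons, d_model)
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=128,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.readout = nn.Linear(d_model, n_neurons)

    def forward(self, spikes):
        # spikes: (batch, time, neurons). All time bins are processed
        # in parallel -- no sequential recurrence, hence fast inference.
        h = self.embed(spikes) + self.pos[:, :spikes.shape[1]]
        return self.readout(self.encoder(h))  # log firing rates

# Poisson negative log-likelihood against observed (here synthetic) counts.
model = TinyNDT(n_neurons=30)
spikes = torch.poisson(torch.ones(8, 100, 30) * 0.3)
log_rates = model(spikes)
loss = nn.PoissonNLLLoss(log_input=True)(log_rates, spikes)
loss.backward()
```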
The Reward Prediction Error hypothesis proposes that phasic activity in the midbrain dopaminergic system reflects the prediction errors needed for reinforcement learning. Besides the well-documented association between dopamine and reward processing, dopamine is implicated in a variety of functions without a clear relationship to reward prediction error. Fluctuations in dopamine levels influence the subjective perception of time, dopamine bursts precede the generation of motor responses, and the dopaminergic system innervates regions of the brain, including the hippocampus and areas of prefrontal cortex, whose function is not uniquely tied to reward. In this manuscript, we propose that a common theme linking these functions is representation, and that prediction errors signaled by the dopamine system, in addition to driving associative learning, can also support the acquisition of adaptive state representations. In a series of simulations, we show how this extension can account for the role of dopamine in temporal and spatial representation, motor response, and abstract categorization tasks. By extending the role of dopamine signals to learning state representations, we resolve a critical challenge to the Reward Prediction Error hypothesis of dopamine function.
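As a toy illustration of the proposed extension, the sketch below uses a single temporal-difference error both to update value weights (classical associative learning) and to nudge a linear state representation. All names, learning rates, and the specific representation-update rule are illustrative assumptions, not the paper's simulations:

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_features = 5, 3
W = rng.normal(scale=0.1, size=(n_features, n_obs))  # state representation
v = np.zeros(n_features)                             # value weights
alpha_v, alpha_w, gamma = 0.1, 0.01, 0.95            # illustrative constants

def td_step(obs, next_obs, reward):
    """One transition: delta plays the role of the phasic dopamine signal."""
    global W, v
    phi, phi_next = W @ obs, W @ next_obs
    delta = reward + gamma * (v @ phi_next) - v @ phi  # reward prediction error
    v += alpha_v * delta * phi                         # associative (value) learning
    W += alpha_w * delta * np.outer(v, obs)            # representation learning
    return delta

# Example: a rewarded transition between two one-hot observations.
obs, next_obs = np.eye(n_obs)[0], np.eye(n_obs)[1]
print(td_step(obs, next_obs, reward=1.0))
```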
Most mathematical models of the forgetting curve fit forgetting data well only under one-time learning conditions, not repeated learning. In this paper, a convolution model of the forgetting curve is proposed to simulate the memory process during learning. In this model, the memory ability (i.e. the central procedure in the working memory model) and the learning material (i.e. the input in the working memory model) are regarded as the system function and the input function, respectively. The forgetting status (i.e. the output in the working memory model) is regarded as the output function, namely the convolution of the memory ability and the learning material. The model is applied to simulate forgetting curves in different situations. The results show that the model can simulate forgetting curves not only under one-time learning conditions but also under repeated learning. The model is further verified in experiments on Mandarin tone learning by Japanese learners, where the predicted curve fits the test points well.
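A minimal numerical sketch of the convolution idea follows. The exponential kernel and impulse-train input are assumed forms chosen for illustration, not the paper's fitted functions:

```python
import numpy as np

t = np.arange(0, 100, 1.0)          # time, arbitrary units
memory_kernel = np.exp(-t / 20.0)   # system function: decay of one memory trace

learning_input = np.zeros_like(t)   # input function: study events
for session in (0, 30, 60):         # repeated learning; use (0,) for one-time
    learning_input[int(session)] = 1.0

# Output (forgetting status) = convolution of system and input functions.
retention = np.convolve(learning_input, memory_kernel)[: len(t)]
```

With a single impulse the output reduces to ordinary exponential forgetting; with repeated study events the decaying traces superpose, which is what lets one model cover both the one-time and the repeated-learning curves.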