Understanding the basic operational logic of the nervous system is essential to advancing neuroscientific research. However, theoretical efforts to tackle this fundamental problem are lacking, despite the abundant empirical data about the brain collected in the past few decades. To address this shortcoming, this document introduces a hypothetical framework for the functional nature of primitive neural networks. It analyzes the idea that the activity of neurons and synapses can symbolically reenact the dynamic changes in the world and thus enable an adaptive system of behavior. More significantly, the network achieves this without relying on an algorithmic structure. When a neuron's activation represents some symbolic element in the environment, each of its synapses can indicate a potential change to the element and its future state. The efficacy of a synaptic connection further specifies the element's particular probability for, or contribution to, such a change. As it fires, a neuron's activation is transferred to its postsynaptic targets, resulting in a chronological shift of the represented elements. As the inherent function of summation in a neuron integrates the various presynaptic contributions, the neural network mimics the collective causal relationship of events in the observed environment.
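A minimal sketch may make the summation mechanism concrete. Everything below (array names, values) is an illustrative assumption, not the paper's model: presynaptic activations mark which represented elements are currently active, synaptic efficacies encode each element's contribution to a possible change, and summation integrates these into the successor state.

```python
import numpy as np

# Hypothetical sketch: each presynaptic neuron represents an environmental
# element; its synaptic efficacy encodes that element's contribution to a
# possible future state. Summation in the postsynaptic neuron integrates
# these contributions, so its activation "reenacts" the collective change.

pre_activation = np.array([1.0, 0.0, 1.0])  # which represented elements are active
efficacy       = np.array([0.6, 0.9, 0.3])  # each element's contribution to the next state

# The postsynaptic activation stands for the represented elements' next state.
post_activation = np.dot(pre_activation, efficacy)
print(f"integrated contribution to the successor element: {post_activation:.2f}")
```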
Feedforward networks (FFNs) are ubiquitous structures in neural systems and have been studied to understand mechanisms of reliable signal and information transmission. In many FFNs, neurons in one layer have intrinsic properties that are distinct from those in their pre- and postsynaptic layers, but how this affects network-level information processing remains unexplored. Here we show that layer-to-layer heterogeneity arising from lamina-specific cellular properties facilitates signal and information transmission in FFNs. Specifically, we found that the signal transformations made by each layer of neurons on an input-driven spike signal demodulate the signal distortions introduced by preceding layers. This mechanism boosts the information carried by a propagating spike signal and thereby supports reliable spike signal and information transmission in a deep FFN. Our study suggests that distinct cell types in neural circuits, performing different computational functions, facilitate information processing as a whole.
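A toy rate-based sketch can illustrate the demodulation idea. The transfer functions and parameters below are hypothetical stand-ins for lamina-specific cellular properties, not the study's spiking model: an expansive layer distorts the signal, and the following compressive layer undoes that distortion, so the signal survives a deep stack of layers.

```python
import numpy as np

# Hypothetical sketch of layer-to-layer heterogeneity in a rate-based FFN:
# alternating layers apply different transfer functions, so each layer
# partially undoes (demodulates) the distortion introduced by the previous
# one. Functions and parameters are illustrative, not the paper's model.

def expansive(r):    # e.g., a layer with a supralinear transfer function
    return r ** 1.5

def compressive(r):  # e.g., a layer with a sublinear transfer function
    return r ** (1 / 1.5)

signal = np.linspace(0.1, 1.0, 5)       # input-driven firing rates
layers = [expansive, compressive] * 3   # a six-layer heterogeneous FFN

r = signal
for f in layers:
    r = f(r)

print("input :", np.round(signal, 3))
print("output:", np.round(r, 3))        # distortion cancels layer by layer
```

In a homogeneous stack (all layers expansive, say), the distortion would compound with depth instead of cancelling, which is the contrast the study draws.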
A systematic assessment of global neural network connectivity through direct electrophysiological assays has remained technically infeasible, even in dissociated neuronal cultures. We introduce an improved algorithmic approach, based on Transfer Entropy, to reconstruct approximations to network structural connectivities from network activity monitored through calcium fluorescence imaging. Grounded in information theory, our method requires no prior assumptions about the statistics of neuronal firing or of neuronal connections. The performance of our algorithm is benchmarked on surrogate time series of calcium fluorescence generated by the simulated dynamics of a network with known ground-truth topology. We find that the effective network topology revealed by Transfer Entropy depends qualitatively on the time-dependent dynamic state of the network (e.g., bursting or non-bursting). We thus demonstrate how conditioning with respect to the global mean activity improves the performance of our method. [...] Compared to other reconstruction strategies, such as cross-correlation or Granger Causality methods, our method based on improved Transfer Entropy is remarkably more accurate. In particular, it provides a good reconstruction of the network clustering coefficient, allowing one to discriminate between weakly and strongly clustered topologies, whereas an approach based on cross-correlations would invariably detect artificially high levels of clustering. Finally, we demonstrate the applicability of our method to real recordings of in vitro cortical cultures. We show that these networks are characterized by an elevated level of clustering compared to a random graph (although not extreme) and by a markedly non-local connectivity.
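For readers unfamiliar with the estimator, the following is a minimal sketch of pairwise Transfer Entropy on binarized activity traces with a one-step history. It shows only the core quantity; the paper's improvements (such as conditioning on the global mean activity to separate bursting from non-bursting regimes) are not implemented here, and the toy data are an assumption for illustration.

```python
import numpy as np
from collections import Counter

# Minimal sketch of pairwise Transfer Entropy TE(Y -> X) for binarized
# activity traces with one time step of history:
#   TE = sum p(x_{t+1}, x_t, y_t) * log2[ p(x_{t+1}|x_t,y_t) / p(x_{t+1}|x_t) ]

def transfer_entropy(x, y):
    triples  = Counter(zip(x[1:], x[:-1], y[:-1]))  # (x_{t+1}, x_t, y_t)
    pairs_xy = Counter(zip(x[:-1], y[:-1]))         # (x_t, y_t)
    pairs_xx = Counter(zip(x[1:], x[:-1]))          # (x_{t+1}, x_t)
    singles  = Counter(x[:-1])                      # x_t
    n = len(x) - 1
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        p_joint   = c / n
        p_cond_xy = c / pairs_xy[(x0, y0)]
        p_cond_x  = pairs_xx[(x1, x0)] / singles[x0]
        te += p_joint * np.log2(p_cond_xy / p_cond_x)
    return te

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 10_000)
x = np.roll(y, 1)   # x copies y with a one-step delay
x[0] = 0
print(f"TE(y -> x) = {transfer_entropy(x, y):.3f} bits")  # ~1 bit (causal direction)
print(f"TE(x -> y) = {transfer_entropy(y, x):.3f} bits")  # ~0 bits (reverse direction)
```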
Statistical properties of spike trains, as well as other neurophysiological data, suggest a number of mathematical models of neurons. These models range from entirely descriptive ones to those deduced from the properties of real neurons. One of them, the diffusion leaky integrate-and-fire neuronal model, which is based on the Ornstein-Uhlenbeck stochastic process restricted by an absorbing barrier, can describe a wide range of neuronal activity in terms of its parameters; these parameters are readily associated with known physiological mechanisms. The other model, the Gamma renewal process, is descriptive, and its parameters only reflect the observed experimental data or assumed theoretical properties. Both of these commonly used models are related here. We show under which conditions the Gamma model arises as an output of the diffusion Ornstein-Uhlenbeck model. In some cases, we find that the Gamma distribution cannot realistically be achieved for the employed parameters of the Ornstein-Uhlenbeck process.
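The relation between the two models can be made tangible with a simulation sketch. All parameter values below are illustrative assumptions, not the paper's: interspike intervals are generated by an Ornstein-Uhlenbeck leaky integrate-and-fire neuron (Euler-Maruyama discretization, firing threshold as the absorbing barrier), and a Gamma distribution is then matched to their first two moments.

```python
import numpy as np

# Hypothetical sketch: interspike intervals (ISIs) from a leaky
# integrate-and-fire neuron driven by an Ornstein-Uhlenbeck process with an
# absorbing barrier (the firing threshold). Matching the first two ISI
# moments to a Gamma distribution illustrates how the models can be related.

rng = np.random.default_rng(1)
tau, mu, sigma = 10.0, 1.2, 0.5   # membrane time constant, drift, noise (ms, a.u.)
threshold, dt, n_spikes = 1.0, 0.01, 500

isis = []
for _ in range(n_spikes):
    v, t = 0.0, 0.0
    while v < threshold:          # absorbing barrier = firing threshold
        v += (mu - v) / tau * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    isis.append(t)                # membrane potential resets after each spike

isis = np.asarray(isis)
# Moment-matched Gamma(shape k, scale theta): k = mean^2/var, theta = var/mean
k = isis.mean() ** 2 / isis.var()
theta = isis.var() / isis.mean()
print(f"mean ISI = {isis.mean():.2f} ms, CV = {isis.std() / isis.mean():.2f}")
print(f"moment-matched Gamma: shape = {k:.2f}, scale = {theta:.2f}")
```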
Mounting evidence in neuroscience suggests the possibility of neuronal representations, in which individual neurons serve as the substrates of different mental representations in a point-to-point way. Combined with associationism, this idea can potentially address a range of theoretical problems and provide a straightforward explanation for our cognition. However, it remains merely a hypothesis with many questions unresolved. In this paper, I propose a new framework to defend the idea of neuronal representations. The strategy proceeds from the micro- to the macro-level. Specifically, at the micro-level, I first propose that our brain prefers and preserves more active neurons. Yet, since the total chance of discharge is limited, neurons must adopt strategies to fire more strongly and frequently. I then describe how they adopt synaptic plasticity, inhibition, and synchronization as their strategies, and demonstrate how the execution of these strategies over time turns them into specialized neurons that respond selectively but strongly to familiar entities. At the macro-level, I further discuss how these specialized neurons underlie various cognitive functions and phenomena. Significantly, by defending neuronal representation, this paper introduces a novel way to understand the relationship between brain and cognition.
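As a loose illustration of the synaptic-plasticity strategy (inhibition and synchronization are omitted), the following hypothetical Hebbian sketch shows how repeated exposure to one pattern can specialize a neuron to respond selectively but strongly to it. The patterns, learning rule, and parameters are assumptions for illustration, not the paper's proposal.

```python
import numpy as np

# Hypothetical sketch of how synaptic plasticity could specialize a neuron:
# Hebbian-style updates on repeated exposure to one "familiar" input pattern
# strengthen its synapses until the neuron responds selectively and strongly
# to that pattern.

rng = np.random.default_rng(3)
familiar = np.array([1.0, 0.0, 1.0, 0.0])
novel    = np.array([0.0, 1.0, 0.0, 1.0])
w = rng.uniform(0.0, 0.1, size=4)        # weak initial synapses

eta = 0.05
for _ in range(100):                     # repeated exposure to the familiar entity
    y = w @ familiar                     # postsynaptic activation
    w += eta * y * familiar              # Hebbian update: fire together, wire together
    w = np.clip(w, 0.0, 1.0)             # crude bound standing in for homeostasis

print("response to familiar:", round(w @ familiar, 2))  # strong
print("response to novel   :", round(w @ novel, 2))     # weak
```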
Neural coding is a field of study that concerns how sensory information is represented in the brain by networks of neurons. The link between external stimulus and neural response can be studied from two parallel points of view. The first, neural encoding, refers to the mapping from stimulus to response; it primarily focuses on understanding how neurons respond to a wide variety of stimuli and on constructing models that accurately describe the stimulus-response relationship. Neural decoding, on the other hand, refers to the reverse mapping, from response to stimulus, where the challenge is to reconstruct a stimulus from the spikes it evokes. Since neuronal responses are stochastic, a one-to-one mapping of stimuli into neural responses does not exist, causing a mismatch between the two viewpoints of neural coding. Here, we use these two perspectives to investigate what rate coding is, in the simple setting of a single stationary stimulus parameter and a single stationary spike train represented by a renewal process. We show that when rate codes are defined in terms of encoding, i.e., when the stimulus parameter is mapped onto the mean firing rate, the rate decoder given by the spike count, or the sample mean, does not always decode the rate codes efficiently, but its efficiency in reading certain rate codes can be improved when correlations within a spike train are taken into account.
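A small simulation sketch can make the encoding/decoding pair concrete. The tuning function f(s), the Gamma renewal model, and all parameters below are illustrative assumptions: the stimulus parameter is encoded in the mean firing rate, and the spike count over a window serves as the rate decoder.

```python
import numpy as np

# Hypothetical sketch of the two views of a rate code: a stationary stimulus
# parameter s is encoded as the mean firing rate of a Gamma renewal process,
# and the spike count over a window serves as the rate decoder.

rng = np.random.default_rng(2)

def f(s):                      # encoding: stimulus -> mean firing rate (Hz)
    return 10.0 + 40.0 * s

def gamma_renewal_count(rate, shape, window):
    # ISIs ~ Gamma(shape, scale) with mean 1/rate  =>  scale = 1/(rate*shape)
    t, count = 0.0, 0
    while True:
        t += rng.gamma(shape, 1.0 / (rate * shape))
        if t > window:
            return count
        count += 1

s_true, window, trials = 0.5, 2.0, 200
counts = [gamma_renewal_count(f(s_true), shape=4.0, window=window)
          for _ in range(trials)]
rate_hat = np.mean(counts) / window   # decoding: spike count -> rate estimate
s_hat = (rate_hat - 10.0) / 40.0      # invert the tuning function
print(f"true s = {s_true}, decoded s = {s_hat:.3f}")
```

The spike-count decoder above ignores the interval structure of the train; for a non-Poisson renewal process (shape not equal to 1), that within-train structure is exactly the kind of information the abstract says can be exploited to read certain rate codes more efficiently.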