In continuous attractor neural networks (CANNs), spatially continuous information such as orientation, head direction, and spatial location is represented by Gaussian-like tuning curves that can be displaced continuously in the space of the neurons' preferred stimuli. We investigate how short-term synaptic depression (STD) reshapes the intrinsic dynamics of the CANN model and its responses to a single static input. In particular, CANNs with STD can support a variety of complex firing patterns and chaotic behaviors, which have the potential to encode multiple stimuli in the neural system.
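The CANN-with-STD dynamics summarized above can be sketched in a minimal simulation. The function name, equations, and parameter values below are illustrative assumptions rather than the paper's own: a 1D ring of neurons with a Gaussian recurrent kernel, divisive inhibition, and a depression variable that is depleted by firing.

```python
import numpy as np

def simulate_cann_std(N=128, k=0.5, a=0.5, tau=1.0, tau_d=30.0, beta=0.01,
                      dt=0.05, steps=2000, seed=0):
    """Euler integration of a 1D ring CANN with short-term depression (STD).

    u : synaptic input profile over the preferred stimuli theta
    r : divisively normalised firing rates
    p : fraction of available synaptic resources (depleted by firing)
    All parameter values are illustrative, not taken from the paper.
    """
    rng = np.random.default_rng(seed)
    theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
    dtheta = 2 * np.pi / N
    # Gaussian-like recurrent kernel over the ring (wrapped distance)
    d = np.angle(np.exp(1j * (theta[:, None] - theta[None, :])))
    J = np.exp(-d**2 / (2 * a**2)) / (np.sqrt(2 * np.pi) * a)
    u = np.exp(-theta**2 / (2 * a**2)) + 0.05 * rng.standard_normal(N)
    p = np.ones(N)
    for _ in range(steps):
        v = np.maximum(u, 0.0) ** 2
        r = v / (1.0 + k * v.sum() * dtheta)           # divisive inhibition
        rec = (J * p) @ r * dtheta                     # depressed recurrent input
        u += dt / tau * (-u + rec)
        p += dt * ((1.0 - p) / tau_d - beta * p * r)   # STD dynamics
    return theta, u, r, p
```

Varying `tau_d` and `beta` in such a sketch moves the network between a static bump, travelling waves, and more irregular regimes, which is the qualitative repertoire the abstract refers to.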
Coarse-graining microscopic models of biological neural networks to obtain mesoscopic models of neural activity is an essential step towards multi-scale models of the brain. Here, we extend a recent theory for mesoscopic population dynamics with static synapses to the case of dynamic synapses exhibiting short-term plasticity (STP). Under the assumption that spike arrivals at synapses have Poisson statistics, we analytically derive stochastic mean-field dynamics for the effective synaptic coupling between finite-size populations undergoing Tsodyks-Markram STP. The novel mean-field equations account for both the finite number of synapses and the correlations between the neurotransmitter release probability and the fraction of available synaptic resources. Comparisons with Monte Carlo simulations of the microscopic model show that, in both feedforward and recurrent networks, the mesoscopic mean-field model accurately reproduces stochastic realizations of the total synaptic input into a postsynaptic neuron and accounts for stochastic switches between Up and Down states as well as for population spikes. The extended mesoscopic population theory of spiking neural networks with STP may be useful for a systematic reduction of detailed biophysical models of cortical microcircuits to efficient and mathematically tractable mean-field models.
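As a concrete reference for the Tsodyks-Markram STP model named above, here is a minimal event-based sketch of its deterministic limit (the paper's mesoscopic stochastic theory itself is not reproduced here). Parameter values are illustrative: `u` is the facilitating release probability and `x` the fraction of available resources, and each spike's efficacy is proportional to their product.

```python
import numpy as np

def tm_stp_amplitudes(spike_times, U=0.2, tau_f=0.6, tau_d=0.3):
    """Deterministic Tsodyks-Markram short-term plasticity, event-based.

    Between spikes, u decays to 0 with time constant tau_f and the
    resource variable x recovers to 1 with time constant tau_d.
    At each spike, u jumps by U*(1-u) and a fraction u of the
    available resources x is released; the spike's synaptic efficacy
    is proportional to u*x (evaluated after the facilitation jump).
    """
    u, x, t_last = 0.0, 1.0, None
    amps = []
    for t in spike_times:
        if t_last is not None:
            dt = t - t_last
            u *= np.exp(-dt / tau_f)                    # facilitation decays
            x = 1.0 - (1.0 - x) * np.exp(-dt / tau_d)   # resources recover
        u += U * (1.0 - u)    # release probability increases at the spike
        amps.append(u * x)    # efficacy of this spike
        x -= u * x            # resources consumed by release
        t_last = t
    return np.array(amps)
```

With a small `U` and long `tau_f` the amplitudes grow across a spike train (facilitation); with a large `U` and short `tau_f` they shrink (depression).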
Short-term presynaptic plasticity refers to changes in the strength of synaptic transmission whereby the amount of neurotransmitter released upon presynaptic stimulation varies over seconds as a function of neuronal firing activity. While a consensus has emerged that changes of synapse strength are crucial to neuronal computations, their modes of expression in vivo remain unclear. Recent experimental studies have reported that glial cells, particularly astrocytes in the hippocampus, are able to modulate short-term plasticity, but the underlying mechanism is poorly understood. Here, we investigate the characteristics of short-term plasticity modulation by astrocytes using a biophysically realistic computational model. Mean-field analysis of the model reveals that astrocytes may mediate counterintuitive effects. Depending on the expressed presynaptic signaling pathways, astrocytes may globally inhibit or potentiate the synapse: the amount of released neurotransmitter in the presence of the astrocyte is transiently smaller or larger than in its absence. But this global effect usually coexists with the opposite local effect on paired pulses: with release-decreasing astrocytes most paired pulses become facilitated, while paired-pulse depression becomes prominent under release-increasing astrocytes. Moreover, we show that the frequency of astrocytic intracellular Ca2+ oscillations controls the effects of the astrocyte on short-term synaptic plasticity. Our model explains several experimental observations that were previously unexplained, and uncovers astrocytic gliotransmission as a possible transient switch between short-term paired-pulse depression and facilitation. This possibility has deep implications for the processing of neuronal spikes and the resulting information transfer at synapses.
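The switch between paired-pulse facilitation and depression that astrocytes are proposed to gate can be illustrated with a deterministic Tsodyks-Markram-type synapse in which the baseline release parameter `U` is treated, as a simplifying assumption, as the quantity modulated by gliotransmission. This is a minimal sketch under that assumption, not the paper's biophysical model:

```python
import numpy as np

def paired_pulse_ratio(U, dt=0.05, tau_f=0.6, tau_d=0.5):
    """Paired-pulse ratio A2/A1 of a Tsodyks-Markram-type synapse.

    U is the baseline release parameter, taken here as the knob that a
    release-decreasing or release-increasing astrocyte would turn down
    or up (an illustrative assumption). Illustrative time constants.
    """
    u1, x1 = U, 1.0
    a1 = u1 * x1                                     # first-pulse efficacy
    u2 = u1 * np.exp(-dt / tau_f)                    # facilitation decays
    u2 = u2 + U * (1.0 - u2)                         # jump at second spike
    x2 = 1.0 - (1.0 - x1 + u1 * x1) * np.exp(-dt / tau_d)  # resource recovery
    a2 = u2 * x2                                     # second-pulse efficacy
    return a2 / a1
```

Lowering `U` (a release-decreasing astrocyte in this sketch) yields ratios above 1 (facilitation), while raising `U` (release-increasing) pushes the ratio below 1 (depression), matching the local paired-pulse effect described in the abstract.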
We show that the local Spike-Timing-Dependent Plasticity (STDP) rule has the effect of regulating the trans-synaptic weights of loops of any length within a simulated network of neurons. We show that, depending on STDP's polarity, functional loops are formed or eliminated in networks driven to normal spiking conditions by random, partially correlated inputs, where functional loops comprise weights that exceed a non-zero threshold. We further prove that STDP is a form of loop-regulating plasticity for the case of a linear network comprising random weights drawn from certain distributions. Thus a notable local synaptic learning rule makes a specific prediction about synapses in the brain in which standard STDP is present: that under normal spiking conditions, they should participate in predominantly feed-forward connections at all scales. Our model implies that any deviations from this prediction would require a substantial modification to the hypothesized role for standard STDP. Given its widespread occurrence in the brain, we predict that STDP could also regulate long-range synaptic loops among individual neurons across all brain scales, up to, and including, the scale of global brain network topology.
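A minimal sketch of the standard pair-based STDP window assumed in such models (parameter values are illustrative): pre-before-post spike pairs potentiate, post-before-pre pairs depress, so in a two-neuron loop where one direction is consistently causal the feed-forward leg strengthens while the return connection weakens.

```python
import numpy as np

def stdp_dw(dt, A_plus=0.01, A_minus=0.012, tau_plus=0.02, tau_minus=0.02):
    """Weight change for a spike-time difference dt = t_post - t_pre (seconds).

    Standard double-exponential pair-based STDP window:
    dt > 0 (pre before post) -> potentiation, dt < 0 -> depression.
    Illustrative amplitudes, with A_minus slightly larger than A_plus.
    """
    dt = np.asarray(dt, dtype=float)
    return np.where(dt > 0,
                    A_plus * np.exp(-dt / tau_plus),
                    -A_minus * np.exp(dt / tau_minus))
```

In a two-neuron loop where A reliably fires 5 ms before B, the A-to-B synapse sees `stdp_dw(0.005) > 0` while the B-to-A synapse sees `stdp_dw(-0.005) < 0`; with `A_minus > A_plus` the loop's net weight change is negative, illustrating the loop-eliminating regime.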
We first review traditional approaches to memory storage and formation, drawing on the literature of quantitative neuroscience as well as statistical physics. These have generally focused on the fast dynamics of neurons; however, there is now an increasing emphasis on the slow dynamics of synapses, whose weight changes are held to be responsible for memory storage. An important first step in this direction was taken in the context of Fusi's cascade model, where complex synaptic architectures were invoked, in particular, to store long-term memories. No explicit synaptic dynamics were, however, invoked in that work. These were recently incorporated theoretically using techniques from agent-based modelling, and subsequently, models of competing and cooperating synapses were formulated. It was found that the key to the storage of long-term memories lay in the competitive dynamics of synapses. In this review, we focus on models of synaptic competition and cooperation, and look at the outstanding challenges that remain.
Recently, Segev et al. (Phys. Rev. E 64, 2001; Phys. Rev. Lett. 88, 2002) made long-term observations of the spontaneous activity of in vitro cortical networks, which differs from the predictions of current models in many features. In this paper we generalize the EI cortical model introduced in a previous paper (S. Scarpetta et al., Neural Comput. 14, 2002), including intrinsic white noise and analyzing the effects of noise on the spontaneous activity of the nonlinear system, in order to account for the experimental results of Segev et al. Analytically, we can distinguish different regimes of activity, depending on the model parameters. Using the analytical results as a guideline, we perform simulations of the nonlinear stochastic model in two different regimes, B and C. The power spectral density (PSD) of the activity and the inter-event interval (IEI) distributions are computed and compared with the experimental results. In regime B the network shows stochastic resonance phenomena, and noise induces aperiodic collective synchronous oscillations that mimic the experimental observations at 0.5 mM Ca concentration. In regime C the model shows spontaneous synchronous periodic activity that mimics the activity observed at 1 mM Ca concentration, and the PSD shows two peaks at the first and second harmonics, in agreement with the experiments at 1 mM Ca. Moreover, due to intrinsic noise and the effects of the nonlinear activation function, the PSD shows a broad-band peak at low frequency. This feature, observed experimentally, is not explained by previous models. In addition, we identify parametric changes (namely, an increase of noise or a decrease of excitatory connections) that reproduce the fading of periodicity found experimentally at long times, and we identify a way to discriminate between these two possible effects by experimentally measuring the low-frequency PSD.
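The kind of computation described above can be illustrated with a generic noisy two-population excitatory-inhibitory rate model, integrated with the Euler-Maruyama scheme and analysed with a periodogram. This is a Wilson-Cowan-type sketch with assumed parameter values, not the specific EI model of Scarpetta et al.:

```python
import numpy as np

def simulate_ei(T=20.0, dt=1e-3, tau=0.02, wEE=2.0, wEI=2.5, wIE=2.5,
                wII=1.5, h=0.5, sigma=0.3, seed=1):
    """Euler-Maruyama simulation of a noisy two-population E-I rate model.

    Generic Wilson-Cowan-type dynamics with additive white noise;
    all parameter values are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    E, I = np.zeros(n), np.zeros(n)
    f = lambda x: 1.0 / (1.0 + np.exp(-4.0 * (x - 0.5)))   # sigmoid gain
    for k in range(n - 1):
        dWe, dWi = rng.standard_normal(2) * np.sqrt(dt)    # Wiener increments
        E[k+1] = E[k] + dt/tau * (-E[k] + f(wEE*E[k] - wEI*I[k] + h)) + sigma*dWe
        I[k+1] = I[k] + dt/tau * (-I[k] + f(wIE*E[k] - wII*I[k] + h)) + sigma*dWi
    return E, I

def psd(x, dt):
    """One-sided power spectral density via the FFT (simple periodogram)."""
    n = len(x)
    X = np.fft.rfft(x - x.mean())
    freqs = np.fft.rfftfreq(n, dt)
    return freqs, (np.abs(X) ** 2) * dt / n
```

Sweeping the noise amplitude `sigma` or the excitatory coupling `wEE` in such a sketch moves the spectrum between noise-induced aperiodic synchrony and periodic activity with harmonic peaks, the two regimes the abstract contrasts.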