In this letter, we first derive the analytical channel impulse response for a cylindrical synaptic channel surrounded by glial cells and validate it with particle-based simulations. We then provide an accurate analytical approximation for the long-time decay rate of the channel impulse response by applying a Taylor expansion to the characteristic equations that determine the decay rates of the system. We validate this approximation by comparing it with the numerical decay rate obtained from the characteristic equation. Overall, we provide a fully analytical description of the long-time behavior of synaptic diffusion, such as the clean-up processes inside the channel after communication has long concluded.
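The letter's exact characteristic equation is not reproduced here. As an illustration of the approach, the sketch below takes a Robin-type radial characteristic equation, alpha*J1(alpha) = h*J0(alpha), as an assumed stand-in for a cylindrical domain with a partially absorbing boundary, finds its smallest root numerically, and compares it with the small-argument Taylor approximation alpha_1 ≈ sqrt(2h). All parameter values (D, R, h) are illustrative.

```python
# Sketch: compare the numerically computed smallest root of an assumed
# Robin-type radial characteristic equation with its small-argument Taylor
# approximation. The equation alpha*J1(alpha) = h*J0(alpha) is a stand-in;
# the letter's actual characteristic equation may differ.
import numpy as np
from scipy.special import j0, j1
from scipy.optimize import brentq

D = 1e-9      # diffusion coefficient (m^2/s), illustrative value
R = 0.5e-6    # channel radius (m), illustrative value
h = 0.05      # dimensionless boundary (uptake) parameter, illustrative value

def characteristic(alpha):
    """f(alpha) = alpha*J1(alpha) - h*J0(alpha); its roots give the eigenvalues."""
    return alpha * j1(alpha) - h * j0(alpha)

# The smallest positive root lies below the first zero of J0 (~2.4048).
alpha1_numeric = brentq(characteristic, 1e-9, 2.4048)

# Taylor expansion for small alpha: J1 ~ alpha/2, J0 ~ 1  =>  alpha1 ~ sqrt(2h)
alpha1_taylor = np.sqrt(2.0 * h)

# Long-time decay rate of the slowest mode, lambda_1 = D * alpha_1^2 / R^2
rate_numeric = D * alpha1_numeric**2 / R**2
rate_taylor = D * alpha1_taylor**2 / R**2

print(f"alpha_1: numeric={alpha1_numeric:.4f}, Taylor={alpha1_taylor:.4f}")
print(f"decay rate (1/s): numeric={rate_numeric:.3e}, Taylor={rate_taylor:.3e}")
```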
We first review traditional approaches to memory storage and formation, drawing on the literature of quantitative neuroscience as well as statistical physics. These approaches have generally focused on the fast dynamics of neurons; however, there is now an increasing emphasis on the slow dynamics of synapses, whose weight changes are held to be responsible for memory storage. An important first step in this direction was taken in the context of Fusi's cascade model, where complex synaptic architectures were invoked, in particular, to store long-term memories. No explicit synaptic dynamics were, however, included in that work. These were recently incorporated theoretically using techniques from agent-based modelling, and subsequently, models of competing and cooperating synapses were formulated. It was found that the key to the storage of long-term memories lay in the competitive dynamics of synapses. In this review, we focus on models of synaptic competition and cooperation, and look at the outstanding challenges that remain.
We show that the local Spike-Timing-Dependent Plasticity (STDP) rule has the effect of regulating the trans-synaptic weights of loops of any length within a simulated network of neurons. We show that, depending on STDP's polarity, functional loops are formed or eliminated in networks driven to normal spiking conditions by random, partially correlated inputs, where functional loops comprise weights that exceed a non-zero threshold. We further prove that STDP is a form of loop-regulating plasticity for the case of a linear network comprising random weights drawn from certain distributions. Thus, a notable local synaptic learning rule makes a specific prediction about synapses in the brain in which standard STDP is present: that under normal spiking conditions, they should participate in predominantly feed-forward connections at all scales. Our model implies that any deviations from this prediction would require a substantial modification to the hypothesized role for standard STDP. Given its widespread occurrence in the brain, we predict that STDP could also regulate long-range synaptic loops among individual neurons across all brain scales, up to and including the scale of global brain network topology.
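As context for the plasticity rule named above, the following is a minimal sketch of a standard pairwise STDP update with exponential windows; the amplitudes and time constants are illustrative and not taken from the paper.

```python
# Sketch: a standard pairwise STDP rule with exponential windows, as commonly
# used in simulation studies; parameter values are illustrative.
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants (ms)

def stdp_dw(delta_t):
    """Weight change for a single pre/post spike pair.

    delta_t = t_post - t_pre (ms). Positive delta_t (pre before post)
    potentiates; negative delta_t depresses. Flipping the signs reverses the
    rule's polarity, which is the manipulation the paper relates to loop
    formation versus elimination.
    """
    if delta_t > 0:
        return A_PLUS * np.exp(-delta_t / TAU_PLUS)
    return -A_MINUS * np.exp(delta_t / TAU_MINUS)

# Example: pre 5 ms before post strengthens the synapse, the reverse weakens it.
print(stdp_dw(+5.0), stdp_dw(-5.0))
```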
We derive analytical formulae for the firing rate of integrate-and-fire neurons endowed with realistic synaptic dynamics. In particular, we incorporate the possibility of multiple synaptic inputs as well as the effect of an absolute refractory period into the description.
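The analytical formulae themselves are not reproduced in this abstract. As a point of reference, the sketch below shows the kind of Monte-Carlo simulation such formulae are typically validated against: a leaky integrate-and-fire neuron with an exponentially filtered synaptic input and an absolute refractory period. All parameter values are illustrative.

```python
# Sketch: Monte-Carlo estimate of the firing rate of a leaky integrate-and-fire
# neuron with an exponentially filtered (Ornstein-Uhlenbeck-like) synaptic
# current and an absolute refractory period. Parameter values are illustrative.
import numpy as np

dt, T = 0.1e-3, 50.0          # time step and simulated duration (s)
tau_m, tau_s = 20e-3, 5e-3    # membrane and synaptic time constants (s)
v_th, v_reset = 20.0, 0.0     # threshold and reset (mV)
t_ref = 2e-3                  # absolute refractory period (s)
mu, sigma = 25.0, 5.0         # mean and fluctuation of the synaptic drive (mV)

rng = np.random.default_rng(0)
v, i_syn, refrac, spikes = 0.0, 0.0, 0.0, 0
for _ in range(int(T / dt)):
    # filtered white-noise synaptic drive
    i_syn += dt / tau_s * (mu - i_syn) + sigma * np.sqrt(2 * dt / tau_s) * rng.standard_normal()
    if refrac > 0:
        refrac -= dt          # membrane is clamped during the refractory period
        continue
    v += dt / tau_m * (-v + i_syn)
    if v >= v_th:
        spikes += 1
        v = v_reset
        refrac = t_ref

print(f"estimated firing rate: {spikes / T:.1f} Hz")
```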
Protein synthesis-dependent, late long-term potentiation (LTP) and depression (LTD) at glutamatergic hippocampal synapses are well-characterized examples of long-term synaptic plasticity. Persistent increased activity of the enzyme protein kinase M (PKM) is thought essential for maintaining LTP. Additional spatial and temporal features that govern LTP and LTD induction are embodied in the synaptic tagging and capture (STC) and cross capture hypotheses. Only synapses that have been tagged by a stimulus sufficient for LTP and learning can capture PKM. A model was developed to simulate the dynamics of key molecules required for LTP and LTD. The model concisely represents relationships between tagging, capture, LTD, and LTP maintenance. The model successfully simulated LTP maintained by persistent synaptic PKM, STC, LTD, and cross capture, and makes testable predictions concerning the dynamics of PKM. First, the maintenance of LTP, and consequently of at least some forms of long-term memory, is predicted to require continual positive feedback in which PKM enhances its own synthesis only at potentiated synapses; this feedback underlies bistability in the activity of PKM. Second, cross capture requires the induction of LTD to induce dendritic PKM synthesis, although this may require tagging of a nearby synapse for LTP. The model also simulates the effects of PKM inhibition, and makes additional predictions for the dynamics of CaM kinases. Experiments testing the above predictions would significantly advance the understanding of memory maintenance.
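To illustrate the bistability argument, the sketch below integrates a minimal positive-feedback model in which PKM stimulates its own synthesis via a Hill-type term; the rate law and parameter values are illustrative stand-ins rather than the published model's equations.

```python
# Sketch: a minimal positive-feedback model in which PKM stimulates its own
# synthesis, yielding bistable kinase activity. The rate law and parameters
# are illustrative stand-ins, not the published model's equations.

k_syn, k_deg = 1.0, 0.5   # maximal synthesis and degradation rates
K, n = 1.0, 4             # half-activation constant and Hill coefficient
basal = 0.02              # small basal synthesis

def dPdt(P):
    """Self-amplifying synthesis (Hill term) minus first-order degradation."""
    return basal + k_syn * P**n / (K**n + P**n) - k_deg * P

def integrate(P0, t_end=200.0, dt=0.01):
    P = P0
    for _ in range(int(t_end / dt)):
        P += dt * dPdt(P)   # forward-Euler integration
    return P

# A low initial PKM level relaxes to the basal ("naive") state; a potentiating
# stimulus that pushes PKM past threshold settles into the high ("maintained
# LTP") state, illustrating bistability through positive feedback.
print("low start  ->", round(integrate(0.05), 3))
print("high start ->", round(integrate(1.50), 3))
```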
In continuous attractor neural networks (CANNs), spatially continuous information such as orientation, head direction, and spatial location is represented by Gaussian-like tuning curves that can be displaced continuously in the space of the preferred stimuli of the neurons. We investigate how short-term synaptic depression (STD) can reshape the intrinsic dynamics of the CANN model and its responses to a single static input. In particular, CANNs with STD can support various complex firing patterns and chaotic behaviors. These chaotic behaviors have the potential to encode various stimuli in the neuronal system.
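As a rough illustration of the model class, the sketch below implements a ring CANN with divisive normalization and a Tsodyks-Markram-style depression variable. The equations follow a commonly used formulation of CANNs with STD, and the parameter values are illustrative, not the paper's.

```python
# Sketch: a ring continuous attractor network (CANN) with short-term synaptic
# depression (STD), using divisive normalization and a depression variable p.
# Equations and parameters are a generic illustration, not the paper's model.
import numpy as np

N = 128
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
a, J0 = 0.5, 1.0                        # tuning width and coupling strength
dx = 2 * np.pi / N
# Gaussian-like recurrent kernel on the ring (shortest angular distance)
d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, 2 * np.pi - d)
J = J0 / (np.sqrt(2 * np.pi) * a) * np.exp(-d**2 / (2 * a**2))

tau, tau_d = 1.0, 50.0                  # membrane and depression time constants
k, beta = 0.5, 0.01                     # inhibition strength, depression rate
dt = 0.05

u = np.exp(-x**2 / (2 * a**2))          # initial bump at x = 0
p = np.ones(N)                          # synaptic resources (1 = fully recovered)
I_ext = 0.1 * np.exp(-x**2 / (2 * a**2))  # single static input at x = 0

for _ in range(4000):
    r = np.maximum(u, 0.0)**2
    r = r / (1.0 + k * dx * r.sum())    # divisive (global) normalization
    u += dt / tau * (-u + dx * J @ (p * r) + I_ext)
    p += dt / tau_d * (1.0 - p) - dt * beta * p * r

print(f"bump position after transient: {x[np.argmax(u)]:.2f} rad")
# With stronger depression (larger beta) the bump can destabilize and move or
# oscillate spontaneously, the regime the paper associates with complex firing
# patterns and chaotic behavior.
```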