We first review traditional approaches to memory storage and formation, drawing on the literature of quantitative neuroscience as well as statistical physics. These have generally focused on the fast dynamics of neurons; however, there is now an increasing emphasis on the slow dynamics of synapses, whose weight changes are held to be responsible for memory storage. An important first step in this direction was taken in the context of Fusi's cascade model, where complex synaptic architectures were invoked, in particular, to store long-term memories. No explicit synaptic dynamics were, however, invoked in that work. These were recently incorporated theoretically using techniques drawn from agent-based modelling, and models of competing and cooperating synapses were subsequently formulated. It was found that the key to the storage of long-term memories lies in the competitive dynamics of synapses. In this review, we focus on models of synaptic competition and cooperation, and look at the outstanding challenges that remain.
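As a concrete illustration of the cascade idea, the following minimal Python sketch simulates a single binary synapse with metaplastic cascade states. The cascade depth and the geometrically decreasing transition probabilities are illustrative assumptions in the spirit of Fusi's model, not the parameters of any particular study; the point is simply that deeper cascade states retain their efficacy for longer under ongoing random plasticity events.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES = 5                              # cascade depth per efficacy level (assumed)
q = 0.5 ** np.arange(N_STATES)            # probability of switching efficacy, decays with depth
p = 0.5 ** (np.arange(N_STATES) + 1)      # probability of moving deeper into the cascade

def update(strong, state, potentiate):
    """Apply one plasticity event to a binary cascade synapse."""
    if potentiate != strong:              # event opposes current efficacy: try to switch
        if rng.random() < q[state]:
            return (not strong), 0        # switch efficacy, reset to the shallowest state
    elif state < N_STATES - 1 and rng.random() < p[state]:
        return strong, state + 1          # event matches efficacy: move deeper (more stable)
    return strong, state

def lifetime(start_state, n_trials=2000):
    """Average number of random events until a strong synapse loses its efficacy."""
    total = 0
    for _ in range(n_trials):
        strong, state, t = True, start_state, 0
        while strong:
            strong, state = update(strong, state, potentiate=rng.random() < 0.5)
            t += 1
        total += t
    return total / n_trials

print(f"mean lifetime, shallow state: {lifetime(0):6.1f} events")
print(f"mean lifetime, deepest state: {lifetime(N_STATES - 1):6.1f} events")
```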
Protein synthesis-dependent, late long-term potentiation (LTP) and depression (LTD) at glutamatergic hippocampal synapses are well-characterized examples of long-term synaptic plasticity. Persistently increased activity of the enzyme protein kinase M (PKM) is thought to be essential for maintaining LTP. Additional spatial and temporal features that govern LTP and LTD induction are embodied in the synaptic tagging and capture (STC) and cross-capture hypotheses. Only synapses that have been tagged by a stimulus sufficient for LTP and learning can capture PKM. A model was developed to simulate the dynamics of key molecules required for LTP and LTD. The model concisely represents the relationships between tagging, capture, LTD, and LTP maintenance. It successfully simulated LTP maintained by persistent synaptic PKM, STC, LTD, and cross capture, and makes testable predictions concerning the dynamics of PKM. First, the maintenance of LTP, and consequently of at least some forms of long-term memory, is predicted to require continual positive feedback in which PKM enhances its own synthesis only at potentiated synapses; this feedback underlies bistability in the activity of PKM. Second, cross capture is predicted to require the induction of LTD to induce dendritic PKM synthesis, although this may also require tagging of a nearby synapse for LTP. The model also simulates the effects of PKM inhibition and makes additional predictions for the dynamics of CaM kinases. Experiments testing these predictions would significantly advance the understanding of memory maintenance.
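The bistability prediction can be illustrated with a minimal one-variable sketch in which PKM enhances its own synthesis through a Hill-type positive feedback loop. The equation and all parameter values below are illustrative assumptions, not the published model; the brief "tetanus" term stands in for tag-and-capture driven synthesis at a potentiated synapse.

```python
import numpy as np

# Illustrative parameters (assumptions, not fitted values from the paper)
V_BASAL = 0.01   # basal PKM synthesis rate
V_FB    = 1.0    # maximal feedback-driven synthesis rate
K_FB    = 0.3    # half-activation constant of the Hill-type feedback term
K_DEG   = 1.0    # first-order PKM degradation rate
DT      = 0.01

def dP(P, stim=0.0):
    """PKM kinetics: basal + self-amplifying synthesis - degradation (+ transient stimulus)."""
    return V_BASAL + V_FB * P**2 / (K_FB**2 + P**2) - K_DEG * P + stim

P, trace = 0.01, []                        # start at the low (naive) steady state
for step in range(30_000):
    t = step * DT
    stim = 0.5 if 50.0 <= t < 55.0 else 0.0   # brief induction at a tagged synapse (assumed)
    P += DT * dP(P, stim)
    trace.append(P)

print(f"PKM before induction ~{trace[int(40 / DT)]:.3f}")
print(f"PKM long after induction ~{trace[-1]:.3f}  (stays high: bistable switch)")
```

With these assumed numbers the system has two stable states (low and high PKM) separated by a threshold; the transient stimulus pushes PKM over the threshold, after which the positive feedback alone maintains the potentiated state.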
Brain plasticity refers to the brain's ability to change its neuronal connections as a result of environmental stimuli, new experiences, or damage. In this work, we study the effects of synaptic delay on both the coupling strengths and the synchronisation in a neuronal network with synaptic plasticity. We build a network of Hodgkin-Huxley neurons in which the plasticity is governed by Hebbian rules. We verify that, without time delay, the excitatory synapses from high-frequency to low-frequency neurons become stronger, while the inhibitory synapses strengthen in the opposite direction; when the delay is increased, the network develops a non-trivial topology. Regarding synchronisation, this phenomenon is observed only for small values of the synaptic delay.
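The role of the delay can be illustrated with a minimal sketch of a pair-based Hebbian STDP rule in which the presynaptic spike only takes effect after the synaptic delay. The amplitudes and time constants below are illustrative assumptions, not the values used in the study; the sketch shows how the same pre/post spike pair can produce potentiation or depression depending on the delay, which is the mechanism behind the delay-dependent coupling changes.

```python
import numpy as np

# Illustrative STDP parameters (assumptions)
A_PLUS, A_MINUS = 0.01, 0.012       # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0    # time constants (ms)

def stdp_dw(t_post, t_pre, delay):
    """Weight change for one spike pair when the presynaptic spike arrives after
    a synaptic/axonal delay: the effective timing is dt = t_post - (t_pre + delay)."""
    dt = t_post - (t_pre + delay)
    if dt >= 0:                      # delayed presynaptic spike precedes the postsynaptic spike
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    return -A_MINUS * np.exp(dt / TAU_MINUS)

t_pre, t_post = 0.0, 5.0
for delay in (0.0, 2.0, 8.0):
    print(f"delay = {delay:4.1f} ms  ->  dw = {stdp_dw(t_post, t_pre, delay):+.4f}")
```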
Neural connectivity at the cellular and mesoscopic level appears very specific and is presumed to arise from highly specific developmental mechanisms. However, there are general shared features of connectivity in systems as different as the networks formed by individual neurons in Caenorhabditis elegans or in rat visual cortex and the mesoscopic circuitry of cortical areas in the mouse, macaque, and human brain. In all these systems, connection length distributions have very similar shapes, with an initial large peak and a long flat tail representing the admixture of long-distance connections to mostly short-distance connections. Furthermore, not all potentially possible synapses are formed: only a fraction of axons (the filling fraction) establishes synapses with spatially neighboring neurons. We explored what aspects of these connectivity patterns can be explained simply by random axonal outgrowth. We found that random axonal growth away from the soma can already reproduce the known distance distribution of connections. We also observed that experimentally observed filling fractions can be generated by competition for available space at the target neurons, a model markedly different from previous explanations. These findings may serve as a baseline model for the development of connectivity that can be further refined by more specific mechanisms.
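A toy two-dimensional Monte Carlo sketch of the idea is given below: axons grow in random straight directions, a potential synapse is counted whenever an axon passes close to a target soma, and competition for space is modelled by a fixed number of synaptic "slots" per target neuron. All numbers and geometric simplifications are illustrative assumptions, not the model used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

N         = 400       # neurons placed uniformly in the unit square (assumed)
REACH     = 0.05      # a potential synapse exists if an axon passes this close to a soma
MAX_SLOTS = 8         # synaptic "space" available on each target neuron

pos = rng.random((N, 2))

# Random straight axonal outgrowth: random direction, exponentially distributed length
angles  = rng.uniform(0.0, 2.0 * np.pi, N)
lengths = rng.exponential(0.3, N)
tips    = pos + lengths[:, None] * np.c_[np.cos(angles), np.sin(angles)]

slots_left = np.full(N, MAX_SLOTS)
potential, realized = [], []

for i in range(N):
    seg = tips[i] - pos[i]                              # axon of neuron i as a line segment
    rel = pos - pos[i]
    t = np.clip(rel @ seg / (seg @ seg + 1e-12), 0.0, 1.0)
    dist_to_axon = np.linalg.norm(rel - t[:, None] * seg, axis=1)
    for j in np.flatnonzero((dist_to_axon < REACH) & (np.arange(N) != i)):
        potential.append(np.linalg.norm(pos[j] - pos[i]))
        if slots_left[j] > 0:                           # competition for space at the target
            slots_left[j] -= 1
            realized.append(potential[-1])

print(f"potential synapses: {len(potential)}, realized: {len(realized)}")
print(f"filling fraction  : {len(realized) / max(len(potential), 1):.2f}")
print(f"median length of realized connections: {np.median(realized):.3f}")
```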
In continuous attractor neural networks (CANNs), spatially continuous information such as orientation, head direction, and spatial location is represented by Gaussian-like tuning curves that can be displaced continuously in the space of the neurons' preferred stimuli. We investigate how short-term synaptic depression (STD) can reshape the intrinsic dynamics of the CANN model and its responses to a single static input. In particular, CANNs with STD can support various complex firing patterns and chaotic behaviors, which have the potential to encode various stimuli in neuronal systems.
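A minimal rate-model sketch of a one-dimensional CANN with STD is given below, using a divisively normalized ring model. The equations follow a standard CANN-with-STD formulation and every parameter value is an illustrative assumption, not one taken from the reviewed work.

```python
import numpy as np

N     = 128                                  # neurons on a ring of preferred stimuli
x     = np.linspace(-np.pi, np.pi, N, endpoint=False)
a     = 0.5                                  # tuning width (assumed)
k     = 0.02                                 # strength of divisive global inhibition
tau_u, tau_d = 1.0, 30.0                     # time constants: synaptic input vs STD recovery
beta  = 2e-4                                 # STD strength (beta = 0 gives the standard CANN)
dt, dx = 0.05, 2 * np.pi / N

# Translation-invariant Gaussian coupling on the ring
d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, 2 * np.pi - d)
J = np.exp(-d**2 / (2 * a**2)) / (np.sqrt(2 * np.pi) * a)

u = 5.0 * np.exp(-x**2 / (4 * a**2))         # initial activity bump
p = np.ones(N)                               # available synaptic resources (1 = fully recovered)

for _ in range(6000):
    r = np.maximum(u, 0.0) ** 2
    r /= 1.0 + k * dx * r.sum()                       # divisive normalization -> firing rates
    u += dt / tau_u * (-u + dx * (J @ (p * r)))       # recurrent drive weakened where p is depleted
    p += dt / tau_d * (1.0 - p - tau_d * beta * p * r)

i = int(np.argmax(u))
print(f"bump peak u = {u[i]:.2f} at x = {x[i]:+.2f} rad; synaptic resources there p = {p[i]:.2f}")
# Depending on beta and tau_d, the bump may stay static, translate spontaneously,
# or collapse; these regimes underlie the complex firing patterns discussed above.
```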
In this letter, we first derive the analytical channel impulse response for a cylindrical synaptic channel surrounded by glial cells and validate it against particle-based simulations. We then provide an accurate analytical approximation for the long-time decay rate of the channel impulse response by applying a Taylor expansion to the characteristic equations that determine the decay rates of the system. We validate this approximation by comparing it with the numerical decay rate obtained from the characteristic equation. Overall, we provide a fully analytical description of the long-time behavior of synaptic diffusion, i.e., the clean-up processes inside the channel after communication has long concluded.
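Since the letter's cylindrical geometry and its characteristic equations are not reproduced here, the sketch below illustrates the same procedure on a simpler one-dimensional analogue: diffusion between a reflecting wall and a partially absorbing (radiation) boundary, whose characteristic equation x tan x = kappa L / D admits the same Taylor-expansion treatment of the slowest decay mode. The geometry and all parameter values are assumptions made for illustration only.

```python
import numpy as np
from scipy.optimize import brentq

# 1-D analogue (assumed): diffusion on [0, L], reflecting at x = 0, radiation
# boundary (absorption rate kappa) at x = L. Decay rates are lambda_n = D * alpha_n^2,
# where alpha_n L solves the characteristic equation (alpha L) tan(alpha L) = kappa L / D.
D, L, kappa = 4.0e-10, 20e-9, 1.0e-3      # m^2/s, m, m/s (assumed values)
c = kappa * L / D                          # dimensionless absorption strength

# Numerical root on (0, pi/2): governs the slowest decaying (long-time) mode
x1 = brentq(lambda x: x * np.tan(x) - c, 1e-9, np.pi / 2 - 1e-9)
lam_exact = D * (x1 / L) ** 2

# Taylor expansion x*tan(x) ~ x^2 * (1 + x^2/3) for small x gives, to second order,
# lambda_1 ~ kappa/L - kappa^2 / (3*D)
lam_taylor = kappa / L - kappa**2 / (3.0 * D)

print(f"absorption strength c = {c:.3f}")
print(f"numerical long-time decay rate : {lam_exact:.4e} 1/s")
print(f"Taylor-approximated decay rate : {lam_taylor:.4e} 1/s")
```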