We present a mathematical model that explains and interprets a novel form of short-term potentiation, found in experiments on Lymnaea neurons to be use-dependent but not time-dependent. The high degree of potentiation is explained using a model of synaptic metaplasticity, while the use-dependence (which relies critically on the presence of kinase in the experiment) is explained using a model of a stochastic, bistable biological switch.
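As an illustration of the switch component, the following is a minimal sketch (not the authors' model; the dynamics, kick size, and noise level are assumed for illustration) of a stochastic, bistable switch that can only change state during brief windows of kinase activity triggered by stimulation, so that potentiation accumulates with use rather than with elapsed time.

```python
# Minimal sketch (not the authors' model): a stochastic, bistable switch whose
# state can only change when the synapse is "used" (stimulated), illustrating
# use-dependent but time-independent potentiation. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def drift(x):
    # Double-well potential V(x) = x**4/4 - x**2/2 gives stable states x = -1
    # (basal) and x = +1 (potentiated); the drift is -dV/dx.
    return x - x**3

def simulate(n_stimuli, dt=0.01, t_active=1.0, kick=0.8, noise=0.6):
    """Evolve the switch only during a brief window of kinase activity after
    each stimulus; between stimuli the state is frozen (time-independent)."""
    x = -1.0  # start in the basal (unpotentiated) state
    for _ in range(n_stimuli):
        x += kick  # each use transiently drives the switch toward the upper well
        for _ in range(int(t_active / dt)):
            x += drift(x) * dt + noise * np.sqrt(dt) * rng.standard_normal()
    return x

# The probability of ending in the potentiated state grows with the number of
# uses, not with elapsed time (the state does not relax between stimuli).
for n in (1, 3, 10):
    flips = sum(simulate(n) > 0 for _ in range(200))
    print(f"{n:2d} stimuli -> potentiated in {flips / 200:.2f} of trials")
```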
While deep neural networks have surpassed human performance in multiple situations, they are prone to catastrophic forgetting: when trained on a new task, they rapidly forget previously learned ones. Neuroscience studies, based on idealized tasks, suggest that in the brain, synapses overcome this issue by adjusting their plasticity depending on their past history. However, such metaplastic behaviours do not transfer directly to mitigate catastrophic forgetting in deep neural networks. In this work, we interpret the hidden weights used by binarized neural networks, a low-precision version of deep neural networks, as metaplastic variables, and modify their training technique to alleviate forgetting. Building on this idea, we propose and demonstrate experimentally, in situations of multitask and stream learning, a training technique that reduces catastrophic forgetting without requiring previously presented data or formal boundaries between datasets, with performance approaching that of more mainstream techniques that rely on task boundaries. We support our approach with a theoretical analysis of a tractable task. This work bridges computational neuroscience and deep learning, and presents significant assets for future embedded and neuromorphic systems, especially when using novel nanodevices featuring physics analogous to metaplasticity.
Unlike the brain, artificial neural networks, including state-of-the-art deep neural networks for computer vision, are subject to catastrophic forgetting: they rapidly forget the previous task when trained on a new one. Neuroscience suggests that biological synapses avoid this issue through the process of synaptic consolidation and metaplasticity: the plasticity itself changes upon repeated synaptic events. In this work, we show that this concept of metaplasticity can be transferred to a particular type of deep neural networks, binarized neural networks, to reduce catastrophic forgetting.
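The core idea, as we understand it, can be sketched as a metaplastic update rule on the hidden (real-valued) weights that lie behind the binary weights: updates that push a hidden weight toward a sign flip are attenuated according to how strongly that weight has already been consolidated. The sketch below is illustrative, not the authors' exact implementation; the attenuation function and the parameter m are assumptions.

```python
# Minimal sketch (illustrative, not the authors' exact implementation) of a
# metaplastic update rule for the hidden weights of a binarized neural network.
# Assumption: updates that would flip the sign of a hidden weight are attenuated
# by a factor that decays with the hidden weight's magnitude, so weights that
# have been consolidated by past training become harder to overwrite.
import numpy as np

def metaplastic_update(w_hidden, grad, lr=0.01, m=1.3):
    """Apply one SGD-like step to the real-valued hidden weights.

    w_hidden : hidden (latent) weights; the network itself uses sign(w_hidden).
    grad     : gradient of the loss with respect to the binary weights.
    m        : metaplasticity strength (m = 0 recovers the plain update).
    """
    step = -lr * grad
    # An update "weakens" a weight if it pushes the hidden value toward zero,
    # i.e. toward a future sign flip of the binary weight.
    weakening = np.sign(step) != np.sign(w_hidden)
    # Attenuate only the weakening updates, more strongly for large |w_hidden|.
    attenuation = np.where(weakening, 1.0 - np.tanh(m * np.abs(w_hidden)) ** 2, 1.0)
    return w_hidden + attenuation * step

# Toy usage: a strongly consolidated weight (large |w|) barely moves when a new
# task pushes against it, while a weakly consolidated one remains plastic.
w = np.array([2.0, 0.1])
g = np.array([1.0, 1.0])   # both gradients push the weights downward
print(metaplastic_update(w, g, lr=0.5))
```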
Behavioral homogeneity is often critical for the functioning of network systems of interacting entities. In power grids, whose stable operation requires generator frequencies to be synchronized--and thus homogeneous--across the network, previous work suggests that the stability of synchronous states can be improved by making the generators homogeneous. Here, we show that a substantial additional improvement is possible by instead making the generators suitably heterogeneous. We develop a general method for attributing this counterintuitive effect to converse symmetry breaking, a recently established phenomenon in which the system must be asymmetric to maintain a stable symmetric state. These findings constitute the first demonstration of converse symmetry breaking in real-world systems, and our method promises to enable identification of this phenomenon in other networks whose functions rely on behavioral homogeneity.
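To make the comparison concrete, the sketch below sets up the machinery one might use to compare the linear stability of the frequency-synchronized state under homogeneous versus heterogeneous generator damping, using the standard swing-equation model. The network, inertias, damping values and couplings are toy assumptions, and the example does not reproduce the paper's analysis or its attribution to converse symmetry breaking.

```python
# Illustrative sketch (not the paper's analysis): linear stability of the
# frequency-synchronized state of a small swing-equation network, so that
# homogeneous and heterogeneous generator parameters can be compared via the
# spectrum of the Jacobian. Network, inertias, damping and couplings are toy values.
import numpy as np

def sync_state_jacobian(K, M, D):
    """Jacobian of the swing equations
        dtheta_i/dt = omega_i
        M_i * domega_i/dt = -D_i * omega_i - sum_j K_ij * sin(theta_i - theta_j)
    linearized around the synchronous state theta_i = theta_j for all i, j."""
    n = len(M)
    L = np.diag(K.sum(axis=1)) - K          # graph Laplacian of the coupling matrix
    return np.block([
        [np.zeros((n, n)), np.eye(n)],
        [-L / M[:, None],  -np.diag(D / M)],
    ])

def stability_margin(K, M, D):
    """Largest real part over the eigenvalues, excluding the zero mode that
    corresponds to a uniform phase shift (more negative = more stable)."""
    eig = np.linalg.eigvals(sync_state_jacobian(K, M, D))
    eig = np.delete(eig, np.argmin(np.abs(eig)))   # drop the ~0 eigenvalue
    return eig.real.max()

# Toy 3-generator ring with unit coupling and unit inertia.
K = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
M = np.ones(3)

print("homogeneous damping  :", stability_margin(K, M, D=np.array([0.5, 0.5, 0.5])))
print("heterogeneous damping:", stability_margin(K, M, D=np.array([0.2, 0.5, 0.8])))
```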
Protein synthesis-dependent, late long-term potentiation (LTP) and depression (LTD) at glutamatergic hippocampal synapses are well characterized examples of long-term synaptic plasticity. Persistently increased activity of the enzyme protein kinase M (PKM) is thought to be essential for maintaining LTP. Additional spatial and temporal features that govern LTP and LTD induction are embodied in the synaptic tagging and capture (STC) and cross capture hypotheses. Only synapses that have been tagged by a stimulus sufficient for LTP and learning can capture PKM. A model was developed to simulate the dynamics of key molecules required for LTP and LTD. The model concisely represents relationships between tagging, capture, LTD, and LTP maintenance. The model successfully simulated LTP maintained by persistent synaptic PKM, STC, LTD, and cross capture, and it makes testable predictions concerning the dynamics of PKM. First, the maintenance of LTP, and consequently of at least some forms of long-term memory, is predicted to require continual positive feedback in which PKM enhances its own synthesis only at potentiated synapses; this feedback underlies bistability in the activity of PKM. Second, cross capture requires the induction of LTD to induce dendritic PKM synthesis, although this may require tagging of a nearby synapse for LTP. The model also simulates the effects of PKM inhibition, and makes additional predictions for the dynamics of CaM kinases. Experiments testing these predictions would significantly advance the understanding of memory maintenance.
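The predicted feedback mechanism can be illustrated with a minimal, single-variable rate equation (not the published model; all rate constants are assumed for illustration) in which PKM enhances its own synthesis through a Hill-type term, which is sufficient to produce two stable levels of PKM activity.

```python
# Minimal sketch (not the full published model): a single rate equation in which
# PKM enhances its own synthesis through a Hill-type positive feedback term,
#     dP/dt = k_basal + k_fb * P**n / (K**n + P**n) - k_deg * P.
# The rate constants are illustrative assumptions chosen to place the system in
# a bistable regime with a low (basal) and a high (potentiated) stable state.
import numpy as np
from scipy.integrate import solve_ivp

k_basal, k_fb, K, n, k_deg = 0.02, 1.0, 0.5, 4, 1.0

def dPdt(t, P):
    return k_basal + k_fb * P**n / (K**n + P**n) - k_deg * P

# Integrate from a low and a high initial condition: each settles onto a
# different stable fixed point, i.e. the switch is bistable.
for P0 in (0.05, 0.8):
    sol = solve_ivp(dPdt, (0.0, 50.0), [P0])
    print(f"P(0) = {P0:.2f}  ->  steady-state P ~ {sol.y[0, -1]:.3f}")
```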
Despite considerable progress in genome- and proteome-based high-throughput screening methods and in rational drug design, the increase in approved drugs over the past decade has not matched the increase in drug development costs. Network description and analysis not only give a systems-level understanding of drug action and disease complexity, but can also help to improve the efficiency of drug design. We give a comprehensive assessment of the analytical tools of network topology and dynamics. The state-of-the-art use of chemical similarity, protein structure, protein-protein interaction, signaling, genetic interaction and metabolic networks in the discovery of drug targets is summarized. We propose that network targeting follows two basic strategies. The central hit strategy selectively targets central nodes/edges of the flexible networks of infectious agents or cancer cells in order to kill them. The network influence strategy works against other diseases, where an efficient reconfiguration of rigid networks needs to be achieved by targeting the neighbors of central nodes or edges. It is shown how network techniques can help in the identification of single-target, edgetic, multi-target and allo-network drug target candidates. We review the recent boom in network methods that aid hit identification and lead selection, optimize drug efficacy, and minimize side-effects and drug toxicity. Successful network-based drug development strategies are illustrated with examples from infections, cancer, metabolic diseases, neurodegenerative diseases and aging. Summarizing more than 1200 references, we suggest an optimized protocol of network-aided drug development and provide a list of systems-level hallmarks of drug quality. Finally, we highlight network-related drug development trends that help to achieve these hallmarks through a cohesive, global approach.
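As a toy illustration of the two targeting strategies, the sketch below ranks the nodes of an example graph by betweenness centrality and contrasts the most central nodes themselves (central hit strategy) with their first neighbors (network influence strategy); the graph and the choice of centrality measure are assumptions, not the review's protocol.

```python
# Toy sketch of the two targeting strategies described above (illustrative only;
# it does not reproduce the review's protocol). On an example interaction graph,
# the "central hit" strategy picks the most central nodes, while the
# "network influence" strategy picks the neighbours of those central nodes.
import networkx as nx

# Stand-in interaction network; a real analysis would load a curated PPI graph.
G = nx.karate_club_graph()

centrality = nx.betweenness_centrality(G)
ranked = sorted(centrality, key=centrality.get, reverse=True)

central_hits = ranked[:3]                       # candidate targets: central nodes
influence_targets = sorted(
    set(nbr for node in central_hits for nbr in G.neighbors(node)) - set(central_hits)
)                                               # candidate targets: their neighbours

print("central hit candidates      :", central_hits)
print("network influence candidates:", influence_targets)
```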