Synaptic memory is considered to be the main element responsible for learning and cognition in humans. Although non-volatile long-term plasticity changes have traditionally been implemented in nanoelectronic synapses for neuromorphic applications, recent studies in neuroscience have revealed that biological synapses undergo meta-stable volatile strengthening followed by long-term strengthening, provided that the frequency of the input stimulus is sufficiently high. Such memory strengthening and memory decay functionalities can potentially lead to adaptive neuromorphic architectures. In this paper, we demonstrate the close resemblance of the magnetization dynamics of a Magnetic Tunnel Junction (MTJ) to short-term plasticity and long-term potentiation observed in biological synapses. We illustrate that, in addition to the magnitude and duration of the input stimulus, the frequency of the stimulus plays a critical role in determining long-term potentiation of the MTJ. Such MTJ synaptic memory arrays can be utilized to create compact, ultra-fast and low-power intelligent neural systems.
Recent breakthroughs in recurrent deep neural networks with long short-term memory (LSTM) units have led to major advances in artificial intelligence. State-of-the-art LSTM models with significantly increased complexity and a large number of parameters, however, have a bottleneck in computing power resulting from limited memory capacity and data communication bandwidth. Here we demonstrate experimentally that LSTM can be implemented with a memristor crossbar, which has a small circuit footprint to store a large number of parameters and in-memory computing capability that circumvents the von Neumann bottleneck. We illustrate the capability of our system by solving real-world problems in regression and classification, which shows that memristor LSTM is a promising low-power and low-latency hardware platform for edge inference.
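The workload a crossbar accelerates is the LSTM cell's dense matrix-vector product, which an analog array computes in one read. The following NumPy sketch shows only the standard LSTM gate equations whose `W @ z` step would map onto a crossbar; all names, dimensions, and initializations are illustrative and not the paper's actual implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, b):
    """One LSTM time step. The matrix-vector product W @ z is the
    operation a memristor crossbar would perform in-memory."""
    z = np.concatenate([x, h])      # current input + previous hidden state
    gates = W @ z + b               # the crossbar read: many analog dot products
    n = h.size
    i = sigmoid(gates[0*n:1*n])     # input gate
    f = sigmoid(gates[1*n:2*n])     # forget gate
    o = sigmoid(gates[2*n:3*n])     # output gate
    g = np.tanh(gates[3*n:4*n])     # candidate cell state
    c = f * c + i * g               # new cell state
    h = o * np.tanh(c)              # new hidden state
    return h, c

# toy dimensions and random weights, purely for illustration
rng = np.random.default_rng(0)
nx, nh = 4, 8
W = rng.standard_normal((4 * nh, nx + nh)) * 0.1
b = np.zeros(4 * nh)
h, c = np.zeros(nh), np.zeros(nh)
x = rng.standard_normal(nx)
h, c = lstm_step(x, h, c, W, b)
```

In hardware, the rows and columns of `W` become conductances of the crossbar, so the `W @ z` line is replaced by applying voltages `z` and sensing column currents, which is what removes the von Neumann memory-transfer bottleneck.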
Congenital cognitive dysfunctions are frequently due to deficits in molecular pathways that underlie synaptic plasticity. For example, Rubinstein-Taybi syndrome (RTS) is due to a mutation in cbp, encoding the histone acetyltransferase CREB-binding protein (CBP). CBP is a transcriptional co-activator for CREB, and induction of CREB-dependent transcription plays a key role in long-term memory (LTM). In animal models of RTS, mutations of cbp impair LTM and late-phase long-term potentiation (LTP). To explore intervention strategies to rescue the deficits in LTP, we extended a previous model of LTP induction to describe histone acetylation and simulated LTP impairment due to cbp mutation. Plausible drug effects were simulated by parameter changes, and many increased LTP. However, no parameter variation consistent with a biochemical effect of a known drug fully restored LTP. Thus we examined paired parameter variations. A pair that simulated the effects of a phosphodiesterase inhibitor (slowing cAMP degradation) concurrent with a deacetylase inhibitor (prolonging histone acetylation) restored LTP. Importantly, these paired parameter changes did not alter basal synaptic weight. A pair that simulated a phosphodiesterase inhibitor and an acetyltransferase activator was similarly effective. For both pairs, strong additive synergism was present. These results suggest that promoting histone acetylation while simultaneously slowing the degradation of cAMP may constitute a promising strategy for restoring deficits in LTP that may be associated with learning deficits in RTS. More generally, these results illustrate that combining modeling and empirical studies may help design effective therapies for improving long-term synaptic plasticity and learning in cognitive disorders.
We investigate a new method to augment recurrent neural networks with extra memory without increasing the number of network parameters. The system has an associative memory based on complex-valued vectors and is closely related to Holographic Reduced Representations and Long Short-Term Memory networks. Holographic Reduced Representations have limited capacity: as they store more information, each retrieval becomes noisier due to interference. Our system in contrast creates redundant copies of stored information, which enables retrieval with reduced noise. Experiments demonstrate faster learning on multiple memorization tasks.
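In frequency-domain Holographic Reduced Representations, binding a key to a value is an elementwise complex multiplication and retrieval unbinds with the conjugate key, so every other stored pair contributes interference noise. The sketch below illustrates that mechanism and the redundant-copy idea in plain NumPy; the dimensions, the number of copies, and the specific scheme are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 1024  # vector dimension

def rand_phase(n):
    """Unit-modulus complex vector: an HRR item in the frequency domain."""
    return np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n))

# store 20 key-value pairs superposed in a single memory trace;
# binding is elementwise complex multiplication
keys = [rand_phase(d) for _ in range(20)]
vals = [rand_phase(d) for _ in range(20)]
trace = np.sum([k * v for k, v in zip(keys, vals)], axis=0)

def similarity(a, b):
    """Normalized real inner product; ~1 for a match, ~0 for unrelated."""
    return np.real(np.vdot(a, b)) / d

# retrieval: unbind with the conjugate key; the other 19 pairs show
# up as interference noise on top of the stored value
noisy = np.conj(keys[0]) * trace

# redundant copies: bind the same value under S extra independent keys
# and average the S retrievals, shrinking the noise by roughly sqrt(S)
S = 8
extra = [rand_phase(d) for _ in range(S)]
trace2 = trace + np.sum([k * vals[0] for k in extra], axis=0)
denoised = np.mean([np.conj(k) * trace2 for k in extra], axis=0)
```

Here `similarity(vals[0], noisy)` is near 1 up to interference, while `similarity(vals[1], noisy)` stays near 0, and `denoised` recovers `vals[0]` with markedly less noise despite the larger trace.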
Orchestration of diverse synaptic plasticity mechanisms across different timescales produces complex cognitive processes. To achieve comparable cognitive complexity in memristive neuromorphic systems, devices capable of emulating short- and long-term plasticity (STP and LTP, respectively) concomitantly are essential. However, this fundamental bionic trait has not been reported in any existing memristor, in which STP and LTP can only be induced selectively because they cannot be decoupled into distinct loci and mechanisms. In this work, we report the first demonstration of truly concomitant STP and LTP in a three-terminal memristor that uses independent physical phenomena to represent each form of plasticity. The emerging layered material Bi2O2Se is used in a memristor for the first time, opening up the prospects for ultra-thin, high-speed and low-power neuromorphic devices. The concerted action of STP and LTP in our memristor allows full-range modulation of the transient synaptic efficacy, from depression to facilitation, by stimulus frequency or intensity, providing a versatile device platform for neuromorphic function implementation. A recurrent neural circuitry model is developed to simulate the intricate sleep-wake cycle autoregulation process, in which the concomitance of STP and LTP is posited as a key factor in enabling this neural homeostasis. This work sheds new light on the highly sophisticated computational capabilities of memristors and their prospects for realization of advanced neuromorphic functions.
Analysis pipelines commonly use high-level technologies that are popular when created, but are unlikely to be readable, executable, or sustainable in the long term. A set of criteria is introduced to address this problem: completeness (no execution requirement beyond a minimal Unix-like operating system, no administrator privileges, no network connection, and storage primarily in plain text); modular design; minimal complexity; scalability; verifiable inputs and outputs; version control; linking analysis with narrative; and free software. As a proof of concept, we introduce Maneage (Managing data lineage), enabling cheap archiving, provenance extraction, and peer verification, which has been tested in several research publications. We show that longevity is a realistic requirement that does not sacrifice immediate or short-term reproducibility. The caveats (with proposed solutions) are then discussed, and we conclude with the benefits for the various stakeholders. This paper is itself written with Maneage (project commit eeff5de).