We present results from a new approach to learning and plasticity in neuromorphic hardware systems: to enable flexibility in the implementable learning mechanisms while retaining the high efficiency associated with neuromorphic implementations, we combine a general-purpose processor with full-custom analog elements. The processor operates in parallel with a fully parallel neuromorphic system consisting of an array of synapses connected to analog, continuous-time neuron circuits. Novel analog correlation-sensor circuits process spike events for each synapse in parallel and in real time. Using this pre-processing, and possibly additional information, the processor computes new weights according to its program; learning rules can therefore be defined in software, giving a large degree of flexibility. The synapses realize correlation detection geared towards Spike-Timing-Dependent Plasticity (STDP) as the central computational primitive in the analog domain. Operating at a speed-up factor of 1000 compared to the biological time-scale, we measure time-constants from tens to hundreds of microseconds. We analyze variability across multiple chips and demonstrate learning using a multiplicative STDP rule. We conclude that the presented approach will enable flexible and efficient learning as a platform for neuroscientific research and technological applications.
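To make the division of labor concrete, here is a minimal sketch of how a software-defined multiplicative STDP rule could consume per-synapse correlation accumulators; the readout interface, function names, and parameter values are illustrative assumptions, not the actual chip API.

```python
import numpy as np

def multiplicative_stdp_update(w, a_plus, a_minus, eta=0.05, w_max=63.0):
    """One plasticity step over an array of synaptic weights.

    w        -- current weights (e.g., limited-resolution digital values)
    a_plus   -- accumulated pre-before-post correlation per synapse
    a_minus  -- accumulated post-before-pre correlation per synapse
    """
    # Multiplicative rule: potentiation scales with the remaining
    # headroom (w_max - w), depression scales with the current weight.
    dw = eta * ((w_max - w) * a_plus - w * a_minus)
    return np.clip(w + dw, 0.0, w_max)

# Example: 4 synapses driven by stand-in correlation-sensor readouts
rng = np.random.default_rng(0)
w = rng.uniform(0, 63, size=4)
for _ in range(100):
    a_p = rng.exponential(0.1, size=4)   # hypothetical sensor readouts
    a_m = rng.exponential(0.1, size=4)
    w = multiplicative_stdp_update(w, a_p, a_m)
print(w)
```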
Flexible metal-oxide/graphene-oxide hybrid multi-gate neuron transistors were fabricated on flexible graphene substrates. Dendritic integration in both spatial and temporal modes was successfully emulated, and spatiotemporally correlated logic operations were obtained. A proof-of-principle visual-system model emulating the lobula giant movement detector (LGMD) neuron was investigated. Our results are of great interest for flexible neuromorphic cognitive systems.
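A minimal software sketch (not the device physics) of the computation being emulated: spatial summation across several gate inputs combined with temporal summation through a decaying EPSC-like kernel. All names and parameters are illustrative assumptions.

```python
import numpy as np

def epsc_kernel(t, tau=2e-3):
    """Exponentially decaying post-synaptic-current-like kernel."""
    return np.where(t >= 0, np.exp(-t / tau), 0.0)

def dendritic_response(spike_times_per_gate, gate_weights, t):
    """Spatiotemporal sum: weighted over gates, summed over time."""
    total = np.zeros_like(t)
    for w, spikes in zip(gate_weights, spike_times_per_gate):
        for ts in spikes:
            total += w * epsc_kernel(t - ts)
    return total

t = np.linspace(0, 20e-3, 2000)
# Two gates: near-coincident inputs on both gates overlap in time and
# produce a larger combined response than either gate alone.
resp = dendritic_response([[2e-3, 4e-3], [4.5e-3]], [1.0, 0.8], t)
print(resp.max())
```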
We review our current software tools and theoretical methods for applying the Neural Engineering Framework to state-of-the-art neuromorphic hardware. These methods can be used to implement linear and nonlinear dynamical systems that exploit axonal transmission time-delays, and to fully account for non-ideal mixed-analog-digital synapses that exhibit higher-order dynamics with heterogeneous time-constants.
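As a hedged illustration of the standard NEF mapping this builds on (Principle 3): to realize the linear system dx/dt = Ax + Bu through a first-order low-pass synapse h(s) = 1/(tau*s + 1), one drives the population with the transformed matrices A' = tau*A + I and B' = tau*B. The loop below omits neurons, decoders, delays, and the higher-order synapse corrections the paper addresses; it only checks the mapping.

```python
import numpy as np

tau = 0.1          # synaptic time constant (s)
dt = 1e-3          # simulation step (s)
A = np.array([[0.0, -2 * np.pi], [2 * np.pi, 0.0]])  # 1 Hz oscillator
B = np.eye(2)

A_p = tau * A + np.eye(2)   # recurrent transform
B_p = tau * B               # input transform

x = np.array([1.0, 0.0])    # synapse-filtered state
u = np.zeros(2)
for _ in range(1000):
    # First-order synapse as an exponential filter of the recurrent drive:
    # tau * dx/dt = -x + (A'x + B'u)  ==>  dx/dt = Ax + Bu as desired.
    x += (dt / tau) * (A_p @ x + B_p @ u - x)
print(x)   # ~ (1, 0): one full 1 Hz rotation after 1 s
```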
We show that a model of the hippocampus introduced recently by Scarpetta, Zhaoping & Hertz ([2002] Neural Computation 14(10):2371-96) explains the theta phase precession phenomenon. In our model, theta phase precession emerges as a consequence of the associative-memory-like network dynamics, i.e. the network's ability to imprint and recall oscillatory patterns, coded both by the phases and the amplitudes of oscillation. The learning rule used to imprint the oscillatory states is a natural generalization of the one used for static patterns in the Hopfield model, and is based on experimentally observed spike-timing-dependent synaptic plasticity (STDP). In agreement with experimental findings, place-cell activity appears at consistently earlier phases of subsequent cycles of the ongoing theta rhythm during a pass through the place field, while the oscillation amplitude of the place cell's firing rate increases as the animal approaches the center of the place field and decreases as the animal leaves it. The total phase precession of the place cell is lower than 360 degrees, in agreement with experiments. As the animal enters a receptive field, the place cell's activity comes slightly less than 180 degrees after the phase of maximal pyramidal-cell population activity, in agreement with the findings of Skaggs et al. (1996). Our model predicts that the theta phase is much better correlated with location than with time spent in the receptive field. Finally, in agreement with the recent experimental findings of Zugaro et al. (2005), our model predicts that theta phase precession persists after transient intra-hippocampal perturbation.
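A simplified sketch of how an asymmetric STDP window could imprint a phase-coded oscillatory pattern; this condenses the paper's learning rule into a single-cycle pairwise form, and the window shape and all constants are illustrative assumptions.

```python
import numpy as np

def stdp_window(dt, A_plus=1.0, A_minus=0.6, tau_p=10.0, tau_m=20.0):
    """Asymmetric STDP kernel: potentiation for pre-before-post (dt > 0),
    depression for post-before-pre (dt < 0). Times in ms."""
    return np.where(dt > 0, A_plus * np.exp(-dt / tau_p),
                    -A_minus * np.exp(dt / tau_m))

# One pattern assigns each unit a firing phase within a theta cycle;
# the pairwise phase lag sets the sign and size of the imprinted coupling.
rng = np.random.default_rng(1)
N, T = 100, 125.0                      # units, theta period (ms)
phi = rng.uniform(0, 2 * np.pi, N)     # phase code of one pattern
dphi = (phi[:, None] - phi[None, :] + np.pi) % (2 * np.pi) - np.pi
dt_ij = dphi * T / (2 * np.pi)         # lag in ms, wrapped to (-T/2, T/2]
J = stdp_window(dt_ij) / N             # Hebbian rule generalized to phases
print(J.shape, J.mean())
```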
Neuromorphic networks based on nanodevices, such as metal-oxide memristors, phase-change memories, and flash memory cells, have generated considerable interest for their increased energy efficiency and density in comparison to graphics processing units (GPUs) and central processing units (CPUs). Though immense acceleration of the training process can be achieved by leveraging the fact that the time complexity of training does not scale with the network size, it is limited by the space complexity of stochastic gradient descent, which grows quadratically. The main objective of this work is to reduce this space complexity by using low-rank approximations of stochastic gradient descent. This low spatial complexity, combined with streaming methods, allows for significant reductions in memory and compute overhead, opening the door to improvements in the area, time, and energy efficiency of training. We refer to this algorithm, and the architecture that implements it, as the streaming batch eigenupdate (SBE) approach.
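An illustrative sketch of the core idea (not the exact SBE algorithm): each SGD sample contributes a rank-1 outer product delta @ x.T to the weight gradient, so an accumulated batch gradient can be tracked with a streaming rank-k factorization instead of a full m x n matrix, reducing space from O(m*n) to O(k*(m+n)). The incremental-SVD routine below is a generic stand-in.

```python
import numpy as np

def rank_k_accumulate(U, S, V, delta, x, k):
    """Fold the rank-1 update delta @ x.T into the factorization
    U @ diag(S) @ V.T, then truncate back to rank k."""
    # Append the new directions, then re-orthogonalize via a thin QR
    # and an SVD of the small (k+1)x(k+1) core, never forming m x n.
    Ua = np.column_stack([U, delta])
    Va = np.column_stack([V, x])
    Sa = np.concatenate([S, [1.0]])
    Q_u, R_u = np.linalg.qr(Ua)
    Q_v, R_v = np.linalg.qr(Va)
    core = R_u @ np.diag(Sa) @ R_v.T
    Uc, Sc, Vct = np.linalg.svd(core)
    return Q_u @ Uc[:, :k], Sc[:k], Q_v @ Vct[:k, :].T

# Example: track a rank-2 approximation of 50 accumulated rank-1 updates
m, n, k = 64, 32, 2
rng = np.random.default_rng(0)
U, S, V = np.zeros((m, k)), np.zeros(k), np.zeros((n, k))
for _ in range(50):
    U, S, V = rank_k_accumulate(U, S, V, rng.normal(size=m),
                                rng.normal(size=n), k)
G_approx = U @ np.diag(S) @ V.T   # low-rank surrogate for the gradient
print(G_approx.shape)
```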
Neuronal firing activities have attracted considerable attention, since the large repertoire of spatiotemporal patterns in the brain is the basis for adaptive behavior and can also reveal signs of various neurological disorders, including Alzheimer's disease, schizophrenia, and epilepsy. Here, we study the dynamics of a simple neuronal network under different settings on a neuromorphic chip. We observed three different types of collective neuronal firing activities, which agree with clinical data taken from the brain. We constructed a brain phase diagram and showed that within the weak-noise region, the brain operates in an expected noise-induced phase (N-phase) rather than at a so-called self-organized critical boundary. The significance of this study is twofold: first, the deviation of neuronal activities from those of the normal brain could be symptomatic of diseases of the central nervous system, thus paving the way for new diagnostics and treatments; second, the normal brain states in the N-phase are optimal for computation and information processing. The latter may provide a way to establish a powerful new computing paradigm using the collective behavior of networks of spiking neurons.
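A minimal software sketch of how such collective regimes can be probed (the paper's experiments ran on a neuromorphic chip, not in simulation): a small network of leaky integrate-and-fire neurons in which the coupling strength and noise level are the swept "settings". All parameters are illustrative assumptions.

```python
import numpy as np

def simulate(J, sigma, N=100, T=2000, dt=0.1, tau=10.0, v_th=1.0):
    rng = np.random.default_rng(0)
    W = J / N * rng.random((N, N))        # random excitatory coupling
    v = rng.random(N)
    spikes = np.zeros((T, N), dtype=bool)
    for t in range(T):
        noise = sigma * np.sqrt(dt) * rng.normal(size=N)
        # Recurrent drive from the previous step (row -1 is all zeros
        # at t = 0), plus leak and additive noise.
        v += dt / tau * (-v + W @ spikes[t - 1].astype(float)) + noise
        spikes[t] = v >= v_th
        v[spikes[t]] = 0.0                 # reset fired neurons
    return spikes

# Sweep the noise level at fixed coupling: the variance of the
# instantaneous population rate serves as a crude synchrony index
# distinguishing asynchronous, noise-induced, and synchronized firing.
for sigma in (0.01, 0.05, 0.2):
    s = simulate(J=5.0, sigma=sigma)
    rate = s.mean(axis=1)
    print(f"sigma={sigma}: synchrony index = {rate.var():.4f}")
```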