While Moore's law has driven exponential expectations of computing power, its approaching end calls for new avenues to improve overall system performance. One of these avenues is the exploration of alternative brain-inspired computing architectures that promise to achieve the flexibility and computational efficiency of biological neural processing systems. Within this context, neuromorphic intelligence represents a paradigm shift in computing, based on the implementation of spiking neural network architectures that tightly co-locate processing and memory. In this paper, we provide a comprehensive overview of the field, highlighting the different levels of granularity present in existing silicon implementations, comparing approaches that aim at replicating natural intelligence (bottom-up) versus those that aim at solving practical artificial intelligence applications (top-down), and assessing the benefits of the different circuit design styles used to achieve these goals. First, we present the analog, mixed-signal, and digital circuit design styles, identifying the boundary between processing and memory through time multiplexing, in-memory computation, and novel devices. Next, we highlight the key tradeoffs for each of the bottom-up and top-down approaches, survey their silicon implementations, and carry out detailed comparative analyses to extract design guidelines. Finally, we identify both the necessary synergies and the missing elements required to achieve a competitive advantage for neuromorphic edge computing over conventional machine-learning accelerators, and outline the key elements of a framework toward neuromorphic intelligence.
The standard nature of computing is currently being challenged by a range of problems that are starting to hinder technological progress. One of the strategies proposed to address some of these problems is to develop novel brain-inspired processing methods and technologies, and to apply them to a wide range of application scenarios. This is an extremely challenging endeavor that requires researchers in multiple disciplines to combine their efforts and co-design, at the same time, the processing methods, the supporting computing architectures, and their underlying technologies. The journal "Neuromorphic Computing and Engineering" (NCE) has been launched to support this new community in this effort and to provide a forum and repository for presenting and discussing its latest advances. Through close collaboration with our colleagues on the editorial team, the scope and characteristics of NCE have been designed to ensure it serves a growing transdisciplinary and dynamic community across academia and industry.
The development of memristive device technologies has reached a level of maturity that enables the design of complex and large-scale hybrid memristive-CMOS neural processing systems. These systems offer promising solutions for implementing novel in-memory computing architectures for machine learning and data analysis problems. We argue that they are also ideal building blocks for integration into neuromorphic electronic circuits suitable for ultra-low-power brain-inspired sensory processing systems, thus leading to innovative solutions for always-on edge-computing and Internet-of-Things (IoT) applications. Here we present a recipe for creating such systems based on design strategies and computing principles inspired by those used in mammalian brains. We enumerate the specifications and properties of memristive devices required to support always-on learning in neuromorphic computing systems and to minimize their power consumption. Finally, we discuss in which cases such neuromorphic systems can complement conventional processing ones, and highlight the importance of exploiting the physics both of the memristive devices and of the CMOS circuits interfaced to them.
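To make the in-memory computing idea concrete, the sketch below (an illustrative model, not from the paper) shows how an idealized memristive crossbar computes a matrix-vector product through device physics alone: each cross-point stores a conductance, and Ohm's and Kirchhoff's laws sum the resulting currents along each row. All numerical values are hypothetical.

```python
import numpy as np

# Idealized crossbar: each cross-point stores a conductance G[i, j].
# Applying read voltages V[j] on the columns yields row currents
# I[i] = sum_j G[i, j] * V[j] -- a multiply-accumulate performed in memory.

rng = np.random.default_rng(0)

# Hypothetical device parameters: conductances between 1 uS and 100 uS.
G = rng.uniform(1e-6, 100e-6, size=(4, 8))   # siemens, one device per cross-point
V = rng.uniform(0.0, 0.2, size=8)            # volts, small read voltages

I = G @ V   # row currents: the analog dot products
print("output currents (A):", I)
```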
Homeostatic plasticity is a stabilizing mechanism that allows neural systems to maintain their activity around a functional operating point. This is an extremely useful mechanism for neuromorphic computing systems, as it can be used to compensate for chronic shifts, for example due to changes in the network structure. However, it is important that this plasticity mechanism operate on time scales much longer than those of conventional synaptic plasticity, in order not to interfere with the learning process. In this paper we present a novel ultra-low-leakage cell and an automatic gain control scheme that can adapt the gain of analog log-domain synapse circuits over extremely long time scales. To validate the proposed scheme, we implemented the ultra-low-leakage cell in a standard 180 nm Complementary Metal-Oxide-Semiconductor (CMOS) process and integrated it into an array of dynamic synapses connected to an adaptive integrate-and-fire neuron. We describe the circuit and demonstrate how it can be configured to scale the gain of all synapses afferent to the silicon neuron so as to keep the neuron's average firing rate constant around a set operating point. The circuit occupies a silicon area of 84 µm × 22 µm and consumes approximately 10.8 nW with a 1.8 V supply voltage. It exhibits time constants of up to 25 kiloseconds, thanks to a controllable leakage current that can be scaled down to 1.2 atto-Amperes (7.5 electrons/s).
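The control principle behind this scheme can be illustrated with a behavioral sketch: a global synaptic gain is adapted by slow negative feedback so that the neuron's average firing rate settles around a set point. This is a software analogue of the idea, not the circuit itself, and all parameters (rates, time constants, the toy rate model) are hypothetical.

```python
# Behavioral sketch of slow homeostatic gain control (assumed parameters).
target_rate = 10.0     # Hz, desired operating point
gain = 1.0             # global gain applied to all afferent synapses
avg_rate = 0.0         # slow estimate of the neuron's firing rate
tau_avg = 100.0        # s, averaging time constant
tau_homeo = 10_000.0   # s, homeostatic time constant (>> synaptic plasticity)
dt = 1.0               # s, simulation step

def firing_rate(drive: float) -> float:
    """Toy rate model: output rate grows with the gained synaptic drive."""
    return max(0.0, drive)

for step in range(50_000):
    drive = gain * 20.0                            # constant input drive
    rate = firing_rate(drive)
    avg_rate += dt / tau_avg * (rate - avg_rate)   # slow rate estimate
    # Negative feedback: lower (raise) the gain if the average rate
    # sits above (below) the set point.
    gain += dt / tau_homeo * (target_rate - avg_rate)

print(f"steady-state gain={gain:.3f}, avg rate={avg_rate:.2f} Hz")
```

Because the homeostatic time constant is orders of magnitude longer than the averaging one, the gain adaptation does not interfere with faster plasticity acting on individual synapses.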
Neuromorphic systems typically employ current-mode circuits that model neural dynamics and produce output currents ranging from a few pico-Amperes to hundreds of micro-Amperes. On-line, real-time monitoring of the signals produced by these circuits is crucial for prototyping and debugging purposes, as well as for analyzing and understanding the network dynamics and computational properties. To this end, we propose a compact on-chip auto-scaling Current-to-Frequency Converter (CFC) for real-time monitoring of analog currents in mixed-signal/analog neuromorphic electronic systems. The proposed CFC is a self-timed asynchronous circuit with a wide dynamic input range of up to 6 decades, from pico-Amps to micro-Amps, and high current measurement sensitivity. To produce a linear output frequency response while properly covering the wide dynamic input range, the circuit automatically detects the scale of the input current and adjusts the scale of its output firing rate accordingly. Here we describe the proposed circuit and present experimental results measured from multiple instances of the circuit, implemented in a standard 180 nm CMOS process and interfaced to silicon neuron and synapse circuits for real-time current monitoring. We demonstrate how the circuit is suitable for measuring neural dynamics by showing the converted response properties of the chip's silicon neurons and synapses as they are stimulated by input spikes.
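The auto-scaling behavior can be summarized with a simple behavioral model (a sketch of the principle, not the circuit netlist): first detect the decade of the input current, then emit a rate that is linear in the normalized mantissa, reporting the detected scale alongside. The conversion factor `F_PER_UNIT` below is an assumed parameter.

```python
import math

I_MIN = 1e-12       # A, bottom of the 6-decade input range (1 pA)
F_PER_UNIT = 100.0  # Hz per unit of normalized current (assumed)

def cfc(i_in: float) -> tuple[int, float]:
    """Return (detected decade, output frequency) for current i_in in amperes."""
    decade = min(5, max(0, int(math.floor(math.log10(i_in / I_MIN)))))
    mantissa = i_in / (I_MIN * 10 ** decade)   # normalized to [1, 10)
    return decade, F_PER_UNIT * mantissa

for i in (3e-12, 4.7e-10, 2e-8, 8e-7):
    d, f = cfc(i)
    print(f"I={i:.1e} A -> scale 10^{d} pA, f={f:.1f} Hz")
```

Normalizing the mantissa this way keeps the output frequency within one linear range regardless of whether the input sits at picoampere or microampere levels.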
Developing mixed-signal analog-digital neuromorphic circuits in advanced scaled processes poses significant design challenges. We present compact and energy-efficient sub-threshold analog synapse and neuron circuits, optimized for a 28 nm FD-SOI process, to implement massively parallel large-scale neuromorphic computing systems. We describe the techniques used for maximizing density with mixed-mode analog/digital synaptic weight configurations, and the methods adopted for minimizing the effect of channel leakage current, in order to implement efficient analog computation based on small pA-nA currents. We present circuit simulation results, based on a new chip that was recently taped out, to demonstrate how the circuits can be used both for low-frequency operation in systems that need to interact with the environment in real time, and for high-frequency operation for fast data processing in different types of spiking neural network architectures.
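For context, analog computation with pA-nA currents relies on MOS transistors operating in weak inversion, where the drain current is exponential in the terminal voltages. The textbook subthreshold nFET model (a standard relation in the neuromorphic literature, not a formula from this specific paper) is

$$ I_{ds} = I_0 \, e^{\kappa V_g / U_T} \left( e^{-V_s / U_T} - e^{-V_d / U_T} \right) $$

where $I_0$ is a process- and geometry-dependent scaling current, $\kappa \approx 0.7$ is the subthreshold slope factor, $U_T \approx 25\,\mathrm{mV}$ is the thermal voltage, and voltages are referred to the bulk. For $V_d - V_s \gtrsim 4\,U_T$ the current saturates, and it is this exponential law that makes channel leakage so critical at the pA scale in deeply scaled processes.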
We present a power-efficient, clock-less, fully asynchronous bit-serial Low-Voltage Differential Signaling (LVDS) link with event-driven instant wake-up and self-sleep features, optimized for high-speed inter-chip communication of asynchronous address-events between neuromorphic chips. The proposed LVDS link makes use of the Level-Encoded Dual-Rail (LEDR) representation and a token-ring architecture to encode and transmit data, avoiding conventional large Clock-Data Recovery (CDR) modules with power-hungry DLL or PLL circuits. We implemented the LVDS circuits in a device fabricated in a standard 0.18 µm CMOS process. The total silicon area of this block is 0.14 mm^2. We present experimental measurement results demonstrating that, with a bit rate of 1.5 Gbps and an event width of 32 bits, the proposed LVDS link can achieve transmission event rates of 35.7 MEvents/s, with current consumption of 19.3 mA and 3.57 mA for the receiver and transmitter blocks, respectively. Given the clock-less and instant on/off design choices, the power consumption of the whole link depends linearly on the data transmission rate. We show that the current consumption can drop to sub-µA levels for low event rates (e.g., <1 kEvents/s), with a floor of 80 nA for the transmitter and 42 nA for the receiver, determined mainly by static off-leakage currents.
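The LEDR representation named above is what removes the need for clock recovery: each data bit is carried on two rails with an alternating phase, so exactly one rail transitions per bit and the receiver can detect symbol boundaries from transitions alone. The sketch below illustrates the encoding in software under that standard definition of LEDR; it is an illustration of the code, not the paper's transmitter logic.

```python
# Level-Encoded Dual-Rail (LEDR) sketch: rails (v, r) per bit, with
# v = data and r = data XOR phase, where the phase alternates each symbol.

def ledr_encode(bits):
    """Encode a bit sequence into (v, r) rail pairs."""
    return [(d, d ^ (k % 2)) for k, d in enumerate(bits)]

def ledr_decode(symbols):
    """Recover the data bits: the value rail carries them directly."""
    return [v for v, _ in symbols]

data = [1, 0, 0, 1, 1, 1, 0]
enc = ledr_encode(data)
assert ledr_decode(enc) == data
# Defining property: exactly one rail changes between consecutive symbols,
# so every bit produces a detectable transition without any clock.
for (v0, r0), (v1, r1) in zip(enc, enc[1:]):
    assert (v0 != v1) + (r0 != r1) == 1
print("LEDR symbols:", enc)
```

As a sanity check on the reported figures, 1.5 Gbps divided by 35.7 MEvents/s implies about 42 bits on the wire per 32-bit event, i.e. roughly 10 bits/event of encoding and handshaking overhead (the exact breakdown is not specified in the abstract).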
Homeostatic plasticity is a stabilizing mechanism commonly observed in real neural systems that allows neurons to maintain their activity around a functional operating point. This phenomenon can be used in neuromorphic systems to compensate for slowly changing conditions or chronic shifts in the system configuration. However, to avoid interference with other adaptation or learning processes active in the neuromorphic system, it is important that the homeostatic plasticity mechanism operate on time scales much longer than those of conventional synaptic plasticity. In this paper we present an ultra-low-leakage circuit, integrated into an automatic gain control scheme, that can implement the synaptic scaling homeostatic process over extremely long time scales. Synaptic scaling consists of globally scaling the weights of all synapses impinging onto a neuron while maintaining their relative differences, so as to preserve the effects of learning. The scheme we propose controls the global gain of analog log-domain synapse circuits to keep the neuron's average firing rate constant around a set operating point, over extremely long time scales. To validate the proposed scheme, we implemented the ultra-low-leakage synaptic scaling homeostatic plasticity circuit in a standard 0.18 µm Complementary Metal-Oxide-Semiconductor (CMOS) process and integrated it into an array of dynamic synapses connected to an adaptive integrate-and-fire neuron. The circuit occupies a silicon area of 84 µm × 22 µm and consumes approximately 10.8 nW with a 1.8 V supply voltage. We present experimental results from the homeostatic circuit and demonstrate how it can be configured to exhibit time scales of up to 100 kiloseconds, thanks to a controllable leakage current that can be scaled down to 0.45 atto-Amperes (2.8 electrons/s).
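A back-of-the-envelope check shows why such a tiny leakage current yields 100-kilosecond time scales; the capacitance and voltage margin below are assumptions for illustration, not values from the paper.

```python
# Order-of-magnitude check on the leakage figures quoted above.
Q_E = 1.602e-19    # C, elementary charge
i_leak = 0.45e-18  # A, controllable leakage current from the abstract

print(f"electrons per second: {i_leak / Q_E:.1f}")   # ~2.8 e-/s, as reported

C = 1e-12   # F, hypothetical storage capacitance (assumed)
dV = 0.1    # V, hypothetical voltage margin (assumed)
# Time to drift dV at constant current: t = C * dV / I ~ 2e5 s,
# the same order as the reported 100 ks time scales.
print(f"time to drift {dV} V on {C*1e12:.0f} pF: {C * dV / i_leak:.2e} s")
```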
Despite their advantages in terms of computational resources, latency, and power consumption, event-based implementations of neural networks have not been able to achieve the same performance figures as their equivalent state-of-the-art deep network models. We propose counter neurons as minimal spiking neuron models that require only addition and comparison operations, thus avoiding costly multiplications. We show how inference carried out in deep counter networks converges to the same accuracy levels as achieved with state-of-the-art conventional networks. As their event-based style of computation leads to reduced latency and sparse updates, counter networks are ideally suited for efficient, compact, and low-power hardware implementation. We present theory and training methods for counter networks, and demonstrate on the MNIST benchmark that counter networks converge quickly, both in terms of time and number of operations required, to state-of-the-art classification accuracy.
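The sketch below illustrates one plausible reading of the counter-neuron idea from the abstract: an integer counter accumulates input events with additions only and emits an event when it crosses a threshold. It is an illustration of the principle, not the exact model from the paper.

```python
# Counter neuron sketch: state updates use only addition and comparison.
class CounterNeuron:
    def __init__(self, threshold: int):
        self.threshold = threshold
        self.counter = 0

    def receive(self, increment: int) -> bool:
        """Add an input event's contribution; return True if the neuron fires."""
        self.counter += increment           # addition only, no multiplication
        if self.counter >= self.threshold:  # comparison only
            self.counter -= self.threshold  # reset by subtraction
            return True
        return False

neuron = CounterNeuron(threshold=4)
spikes = [neuron.receive(x) for x in (1, 2, 1, 3, 1, 1)]
print(spikes)  # [False, False, True, False, True, False]
```

Because nothing fires when no input arrives, updates are sparse and the cost of inference scales with event traffic rather than with network size per time step.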
A striking difference between brain-inspired neuromorphic processors and current von Neumann processor architectures is the way in which memory and processing are organized. As Information and Communication Technologies continue to address the need for increased computational power by increasing the number of cores within a digital processor, neuromorphic engineers and scientists can complement this approach by building processor architectures in which memory is distributed with the processing. In this paper we present a survey of brain-inspired processor architectures that support models of cortical networks and deep neural networks. These architectures range from serial, clocked implementations of multi-neuron systems to massively parallel asynchronous ones, and from purely digital systems to mixed analog/digital systems that implement more biologically realistic models of neurons and synapses, together with a suite of adaptation and learning mechanisms analogous to those found in biological nervous systems. We describe the advantages of the different approaches being pursued and present the challenges that need to be addressed for building artificial neural processing systems that can display the richness of behaviors seen in biological systems.