Neuromorphic computing describes the use of VLSI systems to mimic neuro-biological architectures and is regarded as a promising alternative to the traditional von Neumann architecture. Any new computing architecture needs to support floating-point arithmetic. In this paper, we describe a neuromorphic system that performs IEEE 754-compliant floating-point multiplication. The multiplication is divided into smaller sub-tasks carried out by four components: an Exponent Adder, a Bias Subtractor, a Mantissa Multiplier, and a Sign OF/UF unit. We study the effect of the number of neurons per bit on accuracy and bit error rate, and estimate the optimal number of neurons needed for each component.
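The abstract names the arithmetic sub-tasks but not how they are mapped onto neurons; as a reference point only, a minimal software sketch of the same IEEE 754 single-precision decomposition might look as follows (Python; it truncates rather than rounds to nearest even, ignores zeros, NaN/Inf and subnormals, and the function name fp32_multiply is ours, not the paper's):

```python
import struct

def fp32_multiply(a: float, b: float) -> float:
    """Illustrative IEEE 754 single-precision multiply, decomposed into the
    sub-tasks named in the abstract: sign logic, exponent addition, bias
    subtraction, mantissa multiplication, and overflow/underflow detection.
    Not the paper's neuromorphic implementation; zeros, NaN/Inf and
    subnormals are not handled, and the result is truncated, not rounded."""
    # Unpack both operands into their raw 32-bit patterns.
    ia = struct.unpack(">I", struct.pack(">f", a))[0]
    ib = struct.unpack(">I", struct.pack(">f", b))[0]

    sign_a, sign_b = ia >> 31, ib >> 31
    exp_a, exp_b = (ia >> 23) & 0xFF, (ib >> 23) & 0xFF
    man_a = (ia & 0x7FFFFF) | 0x800000  # restore implicit leading 1
    man_b = (ib & 0x7FFFFF) | 0x800000

    # Sign unit: XOR of the operand signs.
    sign = sign_a ^ sign_b

    # Exponent Adder + Bias Subtractor: exponents carry a +127 bias,
    # so the product's biased exponent is exp_a + exp_b - 127.
    exp = exp_a + exp_b - 127

    # Mantissa Multiplier: 24-bit x 24-bit product, then renormalize.
    prod = man_a * man_b                 # up to 48 bits
    if prod & (1 << 47):                 # product in [2, 4): shift, bump exponent
        man = (prod >> 24) & 0x7FFFFF
        exp += 1
    else:                                # product in [1, 2)
        man = (prod >> 23) & 0x7FFFFF

    # OF/UF unit: flag exponents outside the representable range.
    if exp >= 0xFF or exp <= 0:
        raise ArithmeticError("exponent overflow/underflow")

    out = (sign << 31) | (exp << 23) | man
    return struct.unpack(">f", struct.pack(">I", out))[0]
```

For exactly representable operands the sketch reproduces the hardware result, e.g. fp32_multiply(1.5, -2.25) returns -3.375.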