
Floating-Point Multiplication Using Neuromorphic Computing

Published by: Shrisha Rao
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Neuromorphic computing describes the use of VLSI systems to mimic neuro-biological architectures and is also regarded as a promising alternative to the traditional von Neumann architecture. Any new computing architecture would need a system that can perform floating-point arithmetic. In this paper, we describe a neuromorphic system that performs IEEE 754-compliant floating-point multiplication. The complex process of multiplication is divided into smaller sub-tasks performed by the components Exponent Adder, Bias Subtractor, Mantissa Multiplier, and Sign OF/UF. We study the effect of the number of neurons per bit on accuracy and bit error rate, and estimate the optimal number of neurons needed for each component.
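
As a point of reference (not taken from the paper itself), the decomposition named in the abstract can be sketched in a few lines of conventional, non-neuromorphic Python: exponent addition, bias subtraction, mantissa multiplication, and sign plus overflow/underflow handling. Function and variable names are illustrative assumptions, rounding is by truncation, and special values (NaN, infinity, zero, subnormals) are not handled.

import struct

BIAS = 127  # exponent bias for IEEE 754 binary32

def decompose(x: float):
    """Split a float into (sign, biased exponent, 24-bit significand)."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31
    exp = (bits >> 23) & 0xFF
    frac = bits & 0x7FFFFF
    mant = frac | (1 << 23) if exp != 0 else frac  # restore the implicit leading 1
    return sign, exp, mant

def fp32_multiply(a: float, b: float) -> float:
    sa, ea, ma = decompose(a)
    sb, eb, mb = decompose(b)

    sign = sa ^ sb        # sign unit: XOR of the operand signs
    exp = ea + eb - BIAS  # exponent adder followed by bias subtraction
    prod = ma * mb        # mantissa multiplier: 24 x 24 -> 48-bit product

    # Normalise: shift the product so the leading 1 sits at bit 23,
    # bumping the exponent when the product reaches 2 or more.
    if prod & (1 << 47):
        prod >>= 24
        exp += 1
    else:
        prod >>= 23

    # Overflow/underflow check on the final exponent.
    if exp >= 0xFF:
        raise OverflowError("exponent overflow")
    if exp <= 0:
        raise ValueError("exponent underflow (subnormals not handled)")

    bits = (sign << 31) | (exp << 23) | (prod & 0x7FFFFF)
    return struct.unpack(">f", struct.pack(">I", bits))[0]

print(fp32_multiply(1.5, -2.25))  # -3.375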


Read also

Neuromorphic computing takes inspiration from the brain to create energy efficient hardware for information processing, capable of highly sophisticated tasks. In this article, we make the case that building this new hardware necessitates reinventing electronics. We show that research in physics and material science will be key to create artificial nano-neurons and synapses, to connect them together in huge numbers, to organize them in complex systems, and to compute with them efficiently. We describe how some researchers choose to take inspiration from artificial intelligence to move forward in this direction, whereas others prefer taking inspiration from neuroscience, and we highlight recent striking results obtained with these two approaches. Finally, we discuss the challenges and perspectives in neuromorphic physics, which include developing the algorithms and the hardware hand in hand, making significant advances with small toy systems, as well as building large scale networks.
Machine learning software applications are nowadays ubiquitous in many fields of science and society for their outstanding capability of solving computationally vast problems like the recognition of patterns and regularities in big datasets. One of the main goals of research is the realization of a physical neural network able to perform data processing in a much faster and energy-efficient way than the state-of-the-art technology. Here we show that lattices of exciton-polariton condensates accomplish neuromorphic computing using fast optical nonlinearities and with a lower error rate than any previous hardware implementation. We demonstrate that our neural network significantly increases the recognition efficiency compared to the linear classification algorithms on one of the most widely used benchmarks, the MNIST problem, showing a concrete advantage from the integration of optical systems in reservoir computing architectures.
Neurons in the brain behave as non-linear oscillators, which develop rhythmic activity and interact to process information. Taking inspiration from this behavior to realize high density, low power neuromorphic computing will require huge numbers of nanoscale non-linear oscillators. Indeed, a simple estimation indicates that, in order to fit a hundred million oscillators organized in a two-dimensional array inside a chip the size of a thumb, their lateral dimensions must be smaller than one micrometer. However, despite multiple theoretical proposals, there is no proof of concept today of neuromorphic computing with nano-oscillators. Indeed, nanoscale devices tend to be noisy and to lack the stability required to process data in a reliable way. Here, we show experimentally that a nanoscale spintronic oscillator can achieve spoken digit recognition with accuracies similar to state-of-the-art neural networks. We pinpoint the regime of magnetization dynamics leading to highest performance. These results, combined with the exceptional ability of these spintronic oscillators to interact together, their long lifetime, and low energy consumption, open the path to fast, parallel, on-chip computation based on networks of oscillators.
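
The "simple estimation" mentioned above can be checked with back-of-the-envelope arithmetic; the roughly 1 cm^2 of active chip area assumed below stands in for "a chip the size of a thumb":

    1 cm^2 ≈ 10^8 µm^2
    10^8 µm^2 / 10^8 oscillators = 1 µm^2 per oscillator
    => lateral dimension per oscillator of about 1 µm or less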
Modern computation based on the von Neumann architecture is today a mature cutting-edge science. In this architecture, processing and memory units are implemented as separate blocks interchanging data intensively and continuously. This data transfer is responsible for a large part of the power consumption. The next generation of computer technology is expected to solve problems at the exascale. Even though these future computers will be incredibly powerful, if they are based on von Neumann-type architectures, they will consume between 20 and 30 megawatts of power and will not have intrinsic physically built-in capabilities to learn or deal with complex and unstructured data as our brain does. Neuromorphic computing systems are aimed at addressing these needs. The human brain performs about 10^15 calculations per second using 20 W and a 1.2 L volume. By taking inspiration from biology, new-generation computers could have much lower power consumption than conventional processors, could exploit integrated non-volatile memory and logic, and could be explicitly designed to support dynamic learning in the context of complex and unstructured data. Among their potential future applications, business, health care, social security, and the control of disease and virus spreading might be the most impactful at the societal level. This roadmap envisages the potential applications of neuromorphic materials in cutting-edge technologies and focuses on the design and fabrication of artificial neural systems. The contents of this roadmap will highlight the interdisciplinary nature of this activity, which takes inspiration from biology, physics, mathematics, computer science and engineering. This will provide a roadmap to explore and consolidate new technology behind both present and future applications in many technologically relevant areas.
Although neuromorphic engineering promises the deployment of low latency, adaptive and low power systems that can lead to the design of truly autonomous artificial agents, the development of a fully neuromorphic artificial agent is still missing. While neuromorphic sensing and perception, as well as decision-making systems, are now mature, the control and actuation part is lagging behind. In this paper, we present a closed-loop motor controller implemented on mixed-signal analog-digital neuromorphic hardware using a spiking neural network. The network performs a proportional control action by encoding target, feedback, and error signals using a spiking relational network. It continuously calculates the error through a connectivity pattern, which relates the three variables by means of feed-forward connections. Recurrent connections within each population are used to speed up the convergence, decrease the effect of mismatch and improve selectivity. The neuromorphic motor controller is interfaced with the iCub robot simulator. We tested our spiking P controller in a single joint control task, specifically for the robot head yaw. The spiking controller sends the target positions, reads the motor state from its encoder, and sends back the motor commands to the joint. The performance of the spiking controller is tested in a step response experiment and in a target pursuit task. In this work, we optimize the network structure to make it more robust to noisy inputs and device mismatch, which leads to better control performance.
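
For reference, the proportional control law described above can be sketched in a few lines of conventional (non-spiking) Python. The names read_encoder, send_command and kp below are hypothetical placeholders, and the spiking encoding, the relational network and the iCub interface are not reproduced; this is only a minimal sketch of the underlying P-control loop.

def p_control_step(target, read_encoder, send_command, kp=0.8):
    """One iteration of a closed-loop proportional (P) controller."""
    feedback = read_encoder()      # current joint position (e.g. head yaw)
    error = target - feedback      # error signal relating target and feedback
    send_command(kp * error)       # proportional motor command sent to the joint
    return error

# Toy step response: a crude first-order "joint" that integrates the command.
position = 0.0

def read_encoder():
    return position

def send_command(cmd):
    global position
    position += cmd

for _ in range(20):
    p_control_step(1.0, read_encoder, send_command)

print(round(position, 3))          # converges towards the target of 1.0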