Research in photonic computing has flourished due to the proliferation of optoelectronic components on photonic integration platforms. Photonic integrated circuits have enabled ultrafast artificial neural networks, providing a framework for a new class of information processing machines. Algorithms running on such hardware have the potential to address the growing demand for machine learning and artificial intelligence in areas such as medical diagnosis, telecommunications, and high-performance and scientific computing. In parallel, the development of neuromorphic electronics has highlighted challenges in that domain, particularly those related to processor latency. Neuromorphic photonics offers sub-nanosecond latencies, providing a complementary opportunity to extend the domain of artificial intelligence. Here, we review recent advances in integrated photonic neuromorphic systems, discuss current and future challenges, and outline the advances in science and technology needed to meet those challenges.
Nanophotonics has been an active research field over the past two decades, driven by rising interest in exploring new physics and technologies with light at the nanoscale. As demands on performance and integration level keep increasing, the design and optimization of nanophotonic devices become computationally expensive and time-consuming. Advanced computational methods and artificial intelligence, especially its subfield of machine learning, have led to revolutionary developments in many applications, such as web search, computer vision, and speech/image recognition. The complex models and algorithms help to exploit the enormous parameter space in a highly efficient way. In this review, we summarize recent advances in the emerging field where nanophotonics and machine learning blend. We provide an overview of different computational methods, with a focus on deep learning, for nanophotonic inverse design. The implementation of deep neural networks on photonic platforms is also discussed. This review aims to sketch an illustration of nanophotonic design with machine learning and to give a perspective on future tasks.
All-optical binary convolution with a photonic spiking vertical-cavity surface-emitting laser (VCSEL) neuron is proposed and demonstrated experimentally for the first time. Optical inputs, extracted from digital images and temporally encoded using rectangular pulses, are injected into the VCSEL neuron, which delivers the convolution result as the number of fast (<100 ps) spikes fired. Experimental and numerical results show that binary convolution is achieved successfully with a single spiking VCSEL neuron, and that all-optical binary convolution can be used to calculate image gradient magnitudes to detect edge features and separate the vertical and horizontal components of source images. We also show that this all-optical spiking binary convolution system is robust to noise and can operate on high-resolution images. Additionally, the proposed system offers important advantages such as ultrafast speed, high energy efficiency, and simple hardware implementation, highlighting the potential of spiking photonic VCSEL neurons for high-speed neuromorphic image processing systems and future photonic spiking convolutional neural networks.
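The binary convolution that this abstract describes, where a spike count stands in for the convolution result and separate kernels pick out edge components, can be sketched in plain software. The following minimal Python sketch is illustrative only: the coincidence-count formulation, the toy image, and the 1x2 kernels are assumptions for demonstration, not the paper's experimental pulse encoding.

```python
import numpy as np

def binary_convolve(image, kernel):
    """Binary convolution: for each patch, count the positions where the
    binarized pixel and the binary kernel weight are both 1 (the software
    analogue of counting the spikes fired by the VCSEL neuron)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1), dtype=int)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = int(np.sum(image[i:i + kh, j:j + kw] & kernel))
    return out

# Toy binarized image with a vertical edge between columns 1 and 2
img = np.array([[0, 0, 1, 1]] * 4)
k_left = np.array([[1, 0]])    # responds to the left pixel of each pair
k_right = np.array([[0, 1]])   # responds to the right pixel
# Horizontal gradient magnitude from two binary convolutions:
gx = np.abs(binary_convolve(img, k_right) - binary_convolve(img, k_left))
# gx is nonzero only where horizontal neighbours differ, i.e. at the edge
```

A vertical-gradient map follows the same pattern with 2x1 kernels, and the two maps can be combined into a full gradient magnitude.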
Since the experimental discovery of magnetic skyrmions a decade ago, there have been significant efforts to bring these quasiparticles into all-electrical, fully functional devices, inspired by their fascinating physical and topological properties suited to future low-power electronics. Here, we experimentally demonstrate such a device: an electrically operated skyrmion-based artificial synaptic device designed for neuromorphic computing. We show that controlled current-induced creation, motion, detection and deletion of skyrmions in ferrimagnetic multilayers can be harnessed in a single device at room temperature to imitate the behavior of biological synapses. Using simulations, we demonstrate that such skyrmion-based synapses could perform neuromorphic pattern-recognition computing on a handwritten-digit recognition data set, reaching an accuracy of ~89%, comparable to the software-based training accuracy of ~94%. Chip-level simulation then highlights the potential of the skyrmion synapse compared with existing technologies. Our findings experimentally illustrate the basic concepts of skyrmion-based fully functional electronic devices while providing a new building block for the emerging field of spintronics-based bio-inspired computing.
Neuromorphic computing describes the use of VLSI systems to mimic neuro-biological architectures and is regarded as a promising alternative to the traditional von Neumann architecture. Any new computing architecture needs a system that can perform floating-point arithmetic. In this paper, we describe a neuromorphic system that performs IEEE 754-compliant floating-point multiplication. The complex multiplication process is divided into smaller sub-tasks performed by four components: an Exponent Adder, a Bias Subtractor, a Mantissa Multiplier, and a Sign/Overflow-Underflow (OF/UF) unit. We study the effect of the number of neurons per bit on accuracy and bit error rate, and estimate the optimal number of neurons needed for each component.
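The component decomposition named in this abstract follows the standard structure of binary floating-point multiplication: XOR the signs, add the exponents and subtract the bias, multiply the mantissas, then normalize. A minimal software sketch of that decomposition for IEEE 754 single precision is shown below; it is an illustration of the arithmetic, not the paper's spiking-neuron implementation, and it deliberately ignores subnormals, NaN/infinity handling, and rounding beyond truncation.

```python
import struct

def fp32_bits(x):
    """Reinterpret a Python float as its IEEE 754 binary32 bit pattern."""
    return struct.unpack(">I", struct.pack(">f", x))[0]

def fp32_multiply(a, b):
    """Multiply two normal fp32 values via the component decomposition.
    Sketch only: no subnormal, NaN/inf, or round-to-nearest handling."""
    ba, bb = fp32_bits(a), fp32_bits(b)
    sign = (ba >> 31) ^ (bb >> 31)            # Sign unit: XOR of sign bits
    ea = (ba >> 23) & 0xFF
    eb = (bb >> 23) & 0xFF
    exp = ea + eb - 127                       # Exponent Adder + Bias Subtractor
    ma = (ba & 0x7FFFFF) | 0x800000           # restore implicit leading 1
    mb = (bb & 0x7FFFFF) | 0x800000
    prod = ma * mb                            # Mantissa Multiplier (48-bit product)
    if prod & (1 << 47):                      # product in [2, 4): renormalize,
        prod >>= 1                            # carrying into the exponent
        exp += 1                              # (where an OF check would apply)
    mant = (prod >> 23) & 0x7FFFFF            # truncate back to 23 bits
    bits = (sign << 31) | (exp << 23) | mant
    return struct.unpack(">f", struct.pack(">I", bits))[0]
```

For example, `fp32_multiply(1.5, 2.0)` returns `3.0`, with the normalization branch exercising the carry into the exponent for inputs such as `1.5 * 1.5`.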
Convolutional Neural Networks (CNNs) are powerful and ubiquitous tools for extracting features from large datasets for applications such as computer vision and natural language processing. However, convolution is a computationally expensive operation in digital electronics. In contrast, neuromorphic photonic systems, which have seen a surge of interest in recent years, promise higher bandwidth and energy efficiency for neural network training and inference. Neuromorphic photonics exploits the advantages of optoelectronics, including the ease of analog processing and the ability to bus multiple signals on a single waveguide at the speed of light. Here, we propose a Digital Electronic and Analog Photonic (DEAP) CNN hardware architecture that has the potential to be 2.8 to 14 times faster than current state-of-the-art GPUs while maintaining the same power usage.
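The claim that convolution is expensive in digital electronics can be made concrete by counting multiply-accumulate (MAC) operations: a dense 2D convolution layer costs H_out x W_out x C_in x C_out x K x K MACs. A short sketch of that count follows; the example layer dimensions (a VGG-style 224x224, 64-to-64-channel, 3x3 layer) are an illustrative assumption, not a figure from this paper.

```python
def conv2d_macs(h, w, c_in, c_out, k, stride=1, pad=0):
    """Multiply-accumulate count for one dense conv layer
    (direct convolution, no Winograd/FFT shortcuts)."""
    h_out = (h + 2 * pad - k) // stride + 1
    w_out = (w + 2 * pad - k) // stride + 1
    return h_out * w_out * c_in * c_out * k * k

# Illustrative VGG-style early layer: 224x224 input, 64 -> 64 channels, 3x3 kernel
macs = conv2d_macs(224, 224, 64, 64, 3, pad=1)
# ~1.85 billion MACs for this single layer's forward pass
```

Summed over the tens of layers in a modern network, counts at this scale are what motivate offloading the MACs to analog photonic hardware.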