Due to challenging efficiency limits facing conventional and unconventional electronic architectures, information processors based on photonics have attracted renewed interest. Research communities have yet to settle on definitive techniques to describe the performance of this class of information processors. Because photonic systems differ fundamentally from electronic ones, existing concepts of computer performance measurement do not necessarily apply. In this manuscript, we attempt to quantify the power use of photonic neural networks with state-of-the-art and future hardware. We derive scaling laws, physical limits, and new platform performance metrics. We find that overall performance is regime-like, meaning that the energy efficiency characteristics of a photonic processor can be completely described by no fewer than seven performance numbers. The introduction of these analytical strategies provides a much-needed foundation for quantitative roadmapping and commercial value assignment for silicon photonic neural networks.
The number of parameters in deep neural networks (DNNs) is scaling at about 5$\times$ the rate of Moore's Law. To sustain this pace of growth in DNNs, new technologies and computing architectures are needed. Photonic computing systems are promising
Silicon-photonic neural networks (SPNNs) offer substantial improvements in computing speed and energy efficiency compared to their digital electronic counterparts. However, the energy efficiency and accuracy of SPNNs are highly impacted by uncertainties
Deep learning algorithms are revolutionising many aspects of modern life. Typically, they are implemented in CMOS-based hardware with severely limited memory access times and inefficient data-routing. All-optical neural networks without any electro-optic
Recently, integrated optics has gained interest as a hardware platform for implementing machine learning algorithms. Of particular interest are artificial neural networks, since matrix-vector multiplications, which are used heavily in artificial neural networks
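The matrix-vector multiplication mentioned above is the core linear operation of a fully connected layer, which is exactly what photonic meshes aim to accelerate. A minimal sketch (toy layer sizes and NumPy as a stand-in for the optical hardware, both assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# One fully connected layer: y = f(W x + b). The W @ x product is the
# matrix-vector multiplication that a photonic mesh would perform optically.
W = rng.standard_normal((4, 8))   # weight matrix: 4 output neurons, 8 inputs
b = rng.standard_normal(4)        # bias vector
x = rng.standard_normal(8)        # input activation vector

y = np.tanh(W @ x + b)            # nonlinearity applied after the linear step
print(y.shape)                    # one activation per output neuron: (4,)
```

In an integrated-optics implementation the role of `W @ x` is played by interference in a programmed waveguide mesh, while the nonlinearity is typically applied electronically or opto-electronically.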
Optical implementation of artificial neural networks has been attracting great attention due to its potential for parallel computation at the speed of light. Although all-optical deep neural networks (AODNNs) with a few neurons have been experimentally demonstrated