
Quantifying power use in silicon photonic neural networks

Posted by Alexander Tait
Publication date: 2021
Language: English
Author: Alexander N. Tait





Due to challenging efficiency limits facing conventional and unconventional electronic architectures, information processors based on photonics have attracted renewed interest. Research communities have yet to settle on definitive techniques to describe the performance of this class of information processors. Photonic systems differ from electronic ones, so existing concepts of computer performance measurement do not necessarily apply. In this manuscript, we attempt to quantify the power use of photonic neural networks with state-of-the-art and future hardware. We derive scaling laws, physical limits, and new platform performance metrics. We find that overall performance is regime-like, meaning that the energy efficiency characteristics of a photonic processor can be completely described by no fewer than seven performance numbers. The introduction of these analytical strategies provides a much-needed foundation for quantitative roadmapping and commercial value assignment for silicon photonic neural networks.
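To make the abstract's notion of regime-like energy efficiency concrete, here is a minimal back-of-envelope sketch, not the paper's model, of wall-plug energy per multiply-accumulate (MAC) for an N×N photonic matrix multiplier. All parameter values and the cost breakdown (a fixed per-chip overhead, a per-channel tuning overhead, and a per-detector optical power floor) are illustrative assumptions.

```python
def energy_per_mac(n, fixed_w=0.5, channel_w=1e-3,
                   p_detector_w=1e-6, wall_plug_eff=0.1, clock_hz=10e9):
    """Wall-plug energy per MAC (joules) for an NxN photonic multiplier.

    fixed_w:   per-chip overhead (control logic, ADC/DAC, laser baseline)
    channel_w: per-channel overhead (thermal tuning, modulator driver)
    Each clock cycle, the NxN mesh performs N**2 multiply-accumulates.
    """
    optical_w = n * p_detector_w / wall_plug_eff  # receiver power floor
    total_w = fixed_w + n * channel_w + optical_w
    return total_w / (n**2 * clock_hz)

for n in (4, 32, 256, 2048):
    print(f"N={n:5d}: {energy_per_mac(n)*1e15:10.4f} fJ/MAC")
```

Even this toy model is regime-like: at small N the fixed overhead dominates and energy per MAC falls roughly as 1/N², while at large N the per-channel terms take over and the scaling softens to 1/N. Which term dominates at a given size is exactly the kind of regime boundary the abstract alludes to.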




Read also

The number of parameters in deep neural networks (DNNs) is scaling at about 5× the rate of Moore's Law. To sustain this pace of growth, new technologies and computing architectures are needed. Photonic computing systems are promising avenues, since they can perform the dominant general matrix-matrix multiplication (GEMM) operations in DNNs at a higher throughput than their electrical counterparts. However, purely photonic systems face several challenges, including a lack of photonic memory, the need for conversion circuits, and the accumulation of noise. In this paper, we propose a hybrid electro-photonic system realizing the best of both worlds to accelerate DNNs. In contrast to prior work on photonic and electronic accelerators, we adopt a system-level perspective. Our electro-photonic system includes an electronic host processor and DRAM, and a custom electro-photonic hardware accelerator called ADEPT. The fused hardware accelerator leverages a photonic computing unit for performing highly efficient GEMM operations and a digital electronic ASIC for storage and for performing non-GEMM operations. We also identify architectural optimization opportunities for improving ADEPT's overall efficiency. We evaluate ADEPT using three state-of-the-art neural networks (ResNet-50, BERT-large, and RNN-T) to show its general applicability in accelerating today's DNNs. A head-to-head comparison of ADEPT with systolic array architectures shows that ADEPT can provide, on average, 7.19× higher inference throughput per watt.
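As a rough illustration of why offloading only the GEMM portion still pays off at the system level, the following Amdahl-style sketch estimates end-to-end speedup when non-GEMM work stays on the electronic ASIC. The 90% GEMM fraction and 50× photonic GEMM speedup are illustrative assumptions, not figures from the paper.

```python
def hybrid_speedup(gemm_fraction, photonic_gemm_speedup):
    """End-to-end speedup when only the GEMM fraction is accelerated."""
    return 1.0 / ((1.0 - gemm_fraction)
                  + gemm_fraction / photonic_gemm_speedup)

# DNNs such as ResNet-50 and BERT spend most of their time in GEMM;
# assume 90% here, with a 50x faster photonic GEMM engine.
print(hybrid_speedup(0.90, 50.0))  # ~8.5x end-to-end
```

The non-GEMM remainder quickly becomes the bottleneck, which is why a system-level design that keeps storage and non-GEMM operations on an efficient digital ASIC matters as much as the raw photonic GEMM throughput.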
Silicon-photonic neural networks (SPNNs) offer substantial improvements in computing speed and energy efficiency compared to their digital electronic counterparts. However, the energy efficiency and accuracy of SPNNs are highly impacted by uncertainties that arise from fabrication-process and thermal variations. In this paper, we present the first comprehensive and hierarchical study of the impact of random uncertainties on the classification accuracy of a Mach-Zehnder interferometer (MZI)-based SPNN. We show that such impact can vary based on both the location and characteristics (e.g., tuned phase angles) of a non-ideal silicon-photonic device. Simulation results show that in an SPNN with two hidden layers and 1374 tunable thermal phase shifters, random uncertainties even in mature fabrication processes can lead to a catastrophic 70% accuracy loss.
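The effect of phase uncertainty can be sketched at the single-device level: perturb the two phase settings of one 2×2 MZI with Gaussian noise and measure how far the realized transfer matrix drifts from the intended one. The transfer-matrix convention is one common choice from the MZI-mesh literature, and the noise level is an illustrative assumption.

```python
import numpy as np

def mzi(theta, phi):
    """2x2 MZI transfer matrix (one common convention)."""
    return 1j * np.exp(1j * theta / 2) * np.array([
        [np.exp(1j * phi) * np.sin(theta / 2),
         np.exp(1j * phi) * np.cos(theta / 2)],
        [np.cos(theta / 2), -np.sin(theta / 2)],
    ])

rng = np.random.default_rng(0)
theta, phi, sigma = np.pi / 3, np.pi / 5, 0.02  # radians; sigma is assumed
ideal = mzi(theta, phi)
errs = [np.linalg.norm(mzi(theta + rng.normal(0, sigma),
                           phi + rng.normal(0, sigma)) - ideal)
        for _ in range(1000)]
print(f"mean matrix error: {np.mean(errs):.4f}")
```

In a full mesh these per-device errors compound across hundreds or thousands of cascaded MZIs, which is how modest per-phase deviations can produce the large accuracy losses the abstract reports.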
Deep learning algorithms are revolutionising many aspects of modern life. Typically, they are implemented in CMOS-based hardware with severely limited memory access times and inefficient data routing. All-optical neural networks without any electro-optic …
Recently, integrated optics has gained interest as a hardware platform for implementing machine learning algorithms. Of particular interest are artificial neural networks, since matrix-vector multiplications, which are used heavily in artificial neural networks, can be done efficiently in photonic circuits. The training of an artificial neural network is a crucial step in its application. However, there is currently no efficient protocol for training these networks on the integrated photonics platform. In this work, we introduce a method that enables highly efficient, in situ training of a photonic neural network. We use adjoint variable methods to derive the photonic analogue of the backpropagation algorithm, which is the standard method for computing gradients of conventional neural networks. We further show how these gradients may be obtained exactly by performing intensity measurements within the device. As an application, we demonstrate the training of a numerically simulated photonic artificial neural network. Beyond the training of photonic machine learning implementations, our method may also be of broad interest for experimental sensitivity analysis of photonic systems and the optimization of reconfigurable optics platforms.
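The core idea, that the gradient with respect to each phase can be recovered from a forward field and a backward-propagated adjoint field, can be checked numerically in a toy setting. The sketch below uses a trivial cascade of two ideal phase shifters with an assumed quadratic loss; it illustrates the adjoint principle rather than the paper's full in situ measurement protocol.

```python
import numpy as np

def forward(phis, x):
    """Output field of two cascaded ideal phase shifters."""
    return np.exp(1j * phis.sum()) * x

def loss(phis, x, target):
    return abs(forward(phis, x) - target) ** 2

x, target = 1.0 + 0.0j, 0.6 + 0.4j
phis = np.array([0.3, -0.7])

# Adjoint gradient: for L = |out - target|^2, back-propagating the
# output error gives dL/dphi_k = 2 * Im(conj(field) * adjoint field),
# where "field" is the forward field at shifter k propagated to the
# output. In this toy cascade that field equals the output for both.
out = forward(phis, x)
adj = out - target
grad_adjoint = np.full(2, 2 * np.imag(np.conj(out) * adj))

# Finite-difference check
eps = 1e-6
grad_fd = np.array([
    (loss(phis + eps * np.eye(2)[k], x, target) -
     loss(phis - eps * np.eye(2)[k], x, target)) / (2 * eps)
    for k in range(2)])
print(grad_adjoint)  # ~ [-1.2042, -1.2042]
print(grad_fd)       # matches to ~1e-6
```

The appeal of the method is that the quantity Im(conj(field) · adjoint field) is accessible as an interference intensity, so gradients can in principle be measured on-chip rather than computed off-line.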
Optical implementations of artificial neural networks have been attracting great attention due to their potential for parallel computation at the speed of light. Although all-optical deep neural networks (AODNNs) with a few neurons have recently been experimentally demonstrated with acceptable errors, the feasibility of large-scale AODNNs remains unknown, because errors might accumulate inevitably with an increasing number of neurons and connections. Here, we demonstrate a scalable AODNN with programmable linear operations and tunable nonlinear activation functions. We verify its scalability by measuring and analyzing errors propagating from a single neuron to the entire network. The feasibility of AODNNs is further confirmed by recognizing handwritten digits and fashion products, respectively.
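A minimal numerical sketch of the error-propagation question is given below. The random weights, the 1% per-layer multiplicative amplitude error, and the tanh activation are all illustrative assumptions, not the paper's measured device model; the point is simply to see output deviation grow with depth.

```python
import numpy as np

rng = np.random.default_rng(1)
width, depth, sigma = 16, 20, 0.01  # 1% per-layer amplitude error (assumed)
weights = [rng.standard_normal((width, width)) / np.sqrt(width)
           for _ in range(depth)]
x = rng.standard_normal(width)

clean, noisy = x.copy(), x.copy()
for i, w in enumerate(weights, 1):
    clean = np.tanh(w @ clean)
    # Each optical neuron's output picks up a small random gain error.
    noisy = np.tanh(w @ noisy) * (1 + rng.normal(0, sigma, width))
    if i % 5 == 0:
        err = np.linalg.norm(noisy - clean) / np.linalg.norm(clean)
        print(f"layer {i:2d}: relative output error {err:.4f}")
```

Whether such accumulated deviation actually destroys classification accuracy depends on the task's error tolerance, which is why the abstract's neuron-to-network error measurements are the decisive evidence.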