
Large-scale neuromorphic optoelectronic computing with a reconfigurable diffractive processing unit

Posted by Xing Lin
Published 2020
Research language: English





Application-specific optical processors have been considered disruptive technologies for modern computing that can fundamentally accelerate the development of artificial intelligence (AI) by offering substantially improved computing performance. Recent advancements in optical neural network architectures for neural information processing have been applied to perform various machine learning tasks. However, the existing architectures have limited complexity and performance, and each of them requires its own dedicated design that cannot be reconfigured to switch between different neural network models for different applications after deployment. Here, we propose an optoelectronic reconfigurable computing paradigm by constructing a diffractive processing unit (DPU) that can efficiently support different neural networks and achieve high model complexity with millions of neurons. It allocates almost all of its computational operations optically and achieves extremely high-speed data modulation and large-scale network parameter updating by dynamically programming optical modulators and photodetectors. We demonstrated the reconfiguration of the DPU to implement various diffractive feedforward and recurrent neural networks and developed a novel adaptive training approach to circumvent system imperfections. We applied the trained networks to high-speed classification of handwritten digit images and human action videos on benchmark datasets, and the experimental results showed classification accuracy comparable to that of electronic computing approaches. Furthermore, our prototype system, built with off-the-shelf optoelectronic components, surpasses state-of-the-art graphics processing units (GPUs) several-fold in computing speed and by more than an order of magnitude in system energy efficiency.
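The core operation of such a diffractive layer can be sketched numerically: free-space propagation of a complex optical field followed by a programmable phase mask, with photodetectors reading out intensity. The sketch below is a minimal illustration under assumed parameters (angular-spectrum propagation, an arbitrary wavelength, pixel pitch, and layer spacing), not the authors' implementation.

import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex optical field a distance z (angular spectrum method)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                     # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    k = 2 * np.pi / wavelength
    # Free-space transfer function; evanescent components are dropped.
    H = np.where(arg > 0, np.exp(1j * k * z * np.sqrt(np.maximum(arg, 0))), 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

def diffractive_layer(field, phase_mask, wavelength=750e-9, dx=8e-6, z=0.03):
    """One reconfigurable layer: propagate, then apply a programmable
    phase-only mask (the role played by a spatial light modulator)."""
    field = angular_spectrum_propagate(field, wavelength, dx, z)
    return field * np.exp(1j * phase_mask)

# Toy forward pass: an input image encoded in the field amplitude passes
# through two trainable diffractive layers; a photodetector reads intensity.
rng = np.random.default_rng(0)
image = rng.random((128, 128))                       # stand-in for input data
masks = [rng.uniform(0, 2 * np.pi, (128, 128)) for _ in range(2)]
field = image.astype(complex)
for mask in masks:
    field = diffractive_layer(field, mask)
intensity = np.abs(field) ** 2                       # detector measurement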




Read also

Many architects believe that major improvements in cost-energy-performance must now come from domain-specific hardware. This paper evaluates a custom ASIC, called a Tensor Processing Unit (TPU), deployed in datacenters since 2015 that accelerates the inference phase of neural networks (NN). The heart of the TPU is a 65,536 8-bit MAC matrix multiply unit that offers a peak throughput of 92 TeraOps/second (TOPS) and a large (28 MiB) software-managed on-chip memory. The TPU's deterministic execution model is a better match to the 99th-percentile response-time requirement of our NN applications than are the time-varying optimizations of CPUs and GPUs (caches, out-of-order execution, multithreading, multiprocessing, prefetching, ...) that help average throughput more than guaranteed latency. The lack of such features helps explain why, despite having myriad MACs and a big memory, the TPU is relatively small and low power. We compare the TPU to a server-class Intel Haswell CPU and an Nvidia K80 GPU, which are contemporaries deployed in the same datacenters. Our workload, written in the high-level TensorFlow framework, uses production NN applications (MLPs, CNNs, and LSTMs) that represent 95% of our datacenters' NN inference demand. Despite low utilization for some applications, the TPU is on average about 15X - 30X faster than its contemporary GPU or CPU, with TOPS/Watt about 30X - 80X higher. Moreover, using the GPU's GDDR5 memory in the TPU would triple achieved TOPS and raise TOPS/Watt to nearly 70X the GPU and 200X the CPU.
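As a quick sanity check, the quoted 92 TOPS peak follows directly from the MAC array size and the TPU's 700 MHz clock reported in the same paper, counting each MAC as a multiply plus an add:

macs = 65_536                   # 8-bit MAC matrix multiply unit
ops_per_mac = 2                 # one multiply + one add per MAC
clock_hz = 700e6                # TPU clock rate reported in the paper
peak_tops = macs * ops_per_mac * clock_hz / 1e12
print(f"{peak_tops:.0f} TOPS")  # -> 92, matching the quoted peak throughput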
Ke Yu, Zexian Li, Yue Peng (2021)
Image Signal Processor (ISP) is a crucial component in digital cameras that transforms sensor signals into images for us to perceive and understand. Existing ISP designs always adopt a fixed architecture, e.g., several sequential modules connected in a rigid order. Such a fixed ISP architecture may be suboptimal for real-world applications, where camera sensors, scenes, and tasks are diverse. In this study, we propose a novel Reconfigurable ISP (ReconfigISP) whose architecture and parameters can be automatically tailored to specific data and tasks. In particular, we implement several ISP modules and enable backpropagation for each module by training a differentiable proxy, allowing us to leverage popular differentiable neural architecture search methods and effectively search for the optimal ISP architecture. A proxy tuning mechanism is adopted to maintain the accuracy of proxy networks in all cases. Extensive experiments conducted on image restoration and object detection, with different sensors, light conditions, and efficiency constraints, validate the effectiveness of ReconfigISP. Only hundreds of parameters need to be tuned for each task.
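The search described above follows the differentiable-NAS pattern: each pipeline slot is relaxed into a softmax-weighted mixture of candidate modules so that the architecture choice itself receives gradients. The sketch below shows that relaxation only; the proxy modules are hypothetical stand-ins, not the actual ReconfigISP proxies.

import torch
import torch.nn as nn

class MixedISPStage(nn.Module):
    """One pipeline slot relaxed into a softmax-weighted mixture of
    candidate modules, so the architecture weights receive gradients."""
    def __init__(self, candidates):
        super().__init__()
        self.candidates = nn.ModuleList(candidates)
        self.alpha = nn.Parameter(torch.zeros(len(candidates)))  # arch weights

    def forward(self, x):
        w = torch.softmax(self.alpha, dim=0)
        return sum(wi * m(x) for wi, m in zip(w, self.candidates))

# Hypothetical differentiable proxies standing in for ISP modules
# (e.g., denoise / tone map / sharpen); in ReconfigISP these would be
# small networks trained to imitate the real modules.
def make_candidates():
    return [
        nn.Conv2d(3, 3, 3, padding=1),                 # "denoise" proxy
        nn.Sequential(nn.Conv2d(3, 3, 1), nn.ReLU()),  # "tone map" proxy
        nn.Conv2d(3, 3, 5, padding=2),                 # "sharpen" proxy
    ]

pipeline = nn.Sequential(MixedISPStage(make_candidates()),
                         MixedISPStage(make_candidates()))
raw = torch.rand(1, 3, 64, 64)   # stand-in for sensor data
out = pipeline(raw)              # after search, keep the argmax(alpha)
                                 # module in each stage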
Bo Wang, Jun Zhou, Weng-Fai Wong (2019)
The next wave of on-device AI will likely require energy-efficient deep neural networks. Brain-inspired spiking neural networks (SNNs) have been identified as a promising candidate. Doing away with the need for multipliers significantly reduces energy. For on-device applications, besides computation, communication also incurs a significant amount of energy and time. In this paper, we propose Shenjing, a configurable SNN architecture which fully exposes all on-chip communication to software, enabling software mapping of SNN models with high accuracy at low power. Unlike prior SNN architectures such as TrueNorth, Shenjing does not require any model modification or retraining for the mapping. We show that conventional artificial neural networks (ANNs) such as multilayer perceptrons, convolutional neural networks, and the latest residual neural networks can be mapped successfully onto Shenjing, realizing ANNs with SNNs' energy efficiency. For the MNIST inference problem using a multilayer perceptron, we achieved an accuracy of 96% while consuming just 1.26 mW using 10 Shenjing cores.
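The multiplier-free claim rests on spikes being binary: a weighted sum over 0/1 inputs reduces to conditionally accumulating weights. A toy leaky integrate-and-fire step makes this concrete; the parameters are illustrative, and this is not the Shenjing microarchitecture.

import numpy as np

def lif_step(v, spikes_in, weights, leak=0.9, threshold=1.0):
    """One timestep of leaky integrate-and-fire neurons. Because inputs
    are 0/1 spikes, the weighted sum reduces to summing the weight
    columns of inputs that fired: additions only, no multiplications
    (the leak would typically be a bit-shift in hardware)."""
    v = leak * v + weights[:, spikes_in.astype(bool)].sum(axis=1)
    fired = v >= threshold
    v[fired] = 0.0                        # reset neurons that spiked
    return v, fired.astype(np.uint8)

rng = np.random.default_rng(1)
weights = rng.normal(0, 0.5, (4, 8))      # 8 inputs -> 4 neurons
v = np.zeros(4)
for t in range(5):
    spikes = rng.integers(0, 2, 8).astype(np.uint8)
    v, out = lif_step(v, spikes, weights)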
This paper reports a comprehensive study on the applicability of ultra-scaled ferroelectric FinFETs with a 6 nm thick hafnium zirconium oxide layer for neuromorphic computing in the presence of process variation, flicker noise, and device aging. An intricate study has been conducted of the impact of such variations on the inference accuracy of pre-trained neural networks consisting of analog, quaternary (2-bit/cell), and binary synapses. A pre-trained neural network with 97.5% inference accuracy on the MNIST dataset has been adopted as the baseline. Process variation, flicker noise, and device aging have been characterized, and a statistical model has been developed to capture all these effects during neural network simulation. Extrapolated retention above 10 years has been achieved for the binary read-out procedure. We have demonstrated that the impact of (1) retention degradation due to oxide thickness scaling, (2) process variation, and (3) flicker noise can be abated in ferroelectric FinFET based binary neural networks, which exhibit superior performance over quaternary and analog neural networks amidst all variations. The performance of a neural network is the result of the coalesced performance of device, architecture, and algorithm. This research corroborates the applicability of deeply scaled ferroelectric FinFETs for non-von Neumann computing with the proper combination of architecture and algorithm.
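A common way to fold such a statistical model into inference simulation is to perturb each stored weight at read time and re-evaluate accuracy. With binary read-out, a deviation only matters if it flips a weight's sign, which is one intuition for the robustness of binary synapses reported above. The sketch below uses a made-up variation level, not the paper's characterized distributions.

import numpy as np

def binarize(w):
    return np.where(w >= 0, 1.0, -1.0)

def noisy_binary_inference(x, w, sigma, rng):
    """One binary-synapse layer evaluated under device variation: each
    stored weight is read back with a Gaussian deviation (sigma is a
    hypothetical variation level, not a value from the paper). The
    binary read-out re-thresholds the weight, so a deviation only
    matters when it flips the sign."""
    w_read = binarize(w) + rng.normal(0, sigma, w.shape)
    return x @ binarize(w_read).T

rng = np.random.default_rng(2)
w = rng.normal(0, 1, (10, 784))           # e.g., an MNIST output layer
x = rng.random((5, 784))                  # a small batch of inputs
logits = noisy_binary_inference(x, w, sigma=0.1, rng=rng)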
Accelerating computational tasks with quantum resources is a widely pursued goal that is presently limited by the challenges associated with high-fidelity control of many-body quantum systems. The paradigm of reservoir computing presents an attractive alternative, especially in the noisy intermediate-scale quantum era, since control over the internal system state and knowledge of its dynamics are not required. Instead, complex, unsupervised internal trajectories through a large state space are leveraged as a computational resource. Quantum systems offer a unique venue for reservoir computing, given the presence of interactions unavailable in analogous classical systems and the potential for a computational space that grows exponentially with physical system size. Here, we consider a reservoir comprising a single qudit (a $d$-dimensional quantum system). We demonstrate a robust performance advantage over an analogous classical system, accompanied by a clear improvement with Hilbert space dimension, for two benchmark tasks: signal processing and short-term memory capacity. Qudit reservoirs are directly realizable with current-era quantum hardware, offering immediate practical implementation and a promising outlook for increased performance in larger systems.
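The reservoir recipe itself is compact: drive a fixed $d$-dimensional system with input-dependent unitaries, record measured observables, and train only a linear readout. The classical simulation below illustrates the structure on the short-term memory task; the random generators, population readout, and 2-step delay are arbitrary choices, not the paper's construction.

import numpy as np

rng = np.random.default_rng(3)
d = 8                                    # qudit dimension

def rand_herm(n):
    """Random Hermitian matrix (a fixed, uncontrolled generator)."""
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

def unitary(H):
    """exp(-iH) for Hermitian H via eigendecomposition."""
    vals, vecs = np.linalg.eigh(H)
    return (vecs * np.exp(-1j * vals)) @ vecs.conj().T

H0, H1 = rand_herm(d), rand_herm(d)      # internal dynamics + input coupling

T = 300
u = rng.uniform(-1, 1, T)                # random input signal
psi = np.zeros(d, complex); psi[0] = 1.0
feats = np.empty((T, d))
for t in range(T):
    psi = unitary(H0 + u[t] * H1) @ psi  # input-driven unitary step
    feats[t] = np.abs(psi) ** 2          # measured populations as features

# Short-term memory task: reconstruct the input delayed by 2 steps with
# a linear readout fit by ridge regression (only the readout is trained).
delay = 2
X, y = feats[delay:], u[:-delay]
w = np.linalg.solve(X.T @ X + 1e-3 * np.eye(d), X.T @ y)
prediction = X @ w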
