
Analog Computing with Metatronic Circuits

Added by Volker Sorger
Publication date: 2020
Language: English





Analog photonic solutions offer unique opportunities to address complex computational tasks with unprecedented performance in terms of energy dissipation and speed, overcoming current limitations of modern computing architectures based on electron flows and digital approaches. The lack of modularization and lumped-element reconfigurability in photonics has so far prevented the transition to an all-optical analog computing platform. Here, we explore a nanophotonic platform based on epsilon-near-zero materials capable of solving partial differential equations (PDEs) in the analog domain. Wavelength stretching in zero-index media enables highly nonlocal interactions within the board based on the conduction of electric displacement, which can be monitored to extract the solution of a broad class of PDE problems. By exploiting control of the deposition technique through its process parameters, we demonstrate the possibility of implementing the proposed nano-optic processor using CMOS-compatible indium tin oxide, whose optical properties can be tuned by carrier injection to obtain programmability at high speed and with low energy requirements. Our nano-optical analog processor can be integrated at chip scale, processing arbitrary inputs at the speed of light.
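For orientation on the class of problems such a processor targets, the sketch below solves a small discretized Poisson-type PDE by conventional finite differences and matrix inversion. It is a purely digital reference, not the photonic mechanism described above, and the grid size, source term, and boundary conditions are illustrative assumptions.

# Illustrative digital reference only: a 1D Poisson problem u''(x) = f(x)
# with zero Dirichlet boundaries, discretized by finite differences.
# An analog PDE processor would return the equivalent solution in a single
# physical pass rather than by explicit matrix inversion.
import numpy as np

N = 64                                  # interior grid points (assumed)
h = 1.0 / (N + 1)                       # grid spacing on the unit interval
x = np.linspace(h, 1.0 - h, N)
f = np.sin(np.pi * x)                   # example source term (assumed)

# Standard tridiagonal second-difference operator.
A = (np.diag(-2.0 * np.ones(N)) +
     np.diag(np.ones(N - 1), 1) +
     np.diag(np.ones(N - 1), -1)) / h**2

u = np.linalg.solve(A, f)               # discrete solution of u'' = f

# Analytic check: u(x) = -sin(pi x) / pi^2 for this source term.
assert np.allclose(u, -np.sin(np.pi * x) / np.pi**2, atol=1e-3)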



Related research

To rapidly process temporal information at a low metabolic cost, biological neurons integrate inputs as an analog sum but communicate with spikes, binary events in time. Analog neuromorphic hardware uses the same principles to emulate spiking neural networks with exceptional energy-efficiency. However, instantiating high-performing spiking networks on such hardware remains a significant challenge due to device mismatch and the lack of efficient training algorithms. Here, we introduce a general in-the-loop learning framework based on surrogate gradients that resolves these issues. Using the BrainScaleS-2 neuromorphic system, we show that learning self-corrects for device mismatch resulting in competitive spiking network performance on both vision and speech benchmarks. Our networks display sparse spiking activity with, on average, far less than one spike per hidden neuron and input, perform inference at rates of up to 85 k frames/second, and consume less than 200 mW. In summary, our work sets several new benchmarks for low-energy spiking network processing on analog neuromorphic hardware and paves the way for future on-chip learning algorithms.
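A minimal sketch of the surrogate-gradient trick that such in-the-loop training relies on is given below: the forward pass emits hard binary spikes, while the backward pass substitutes a smooth derivative so gradients can flow through the threshold. The fast-sigmoid surrogate and the sharpness value beta are generic assumptions, not the specific choices made for BrainScaleS-2.

# Minimal surrogate-gradient sketch (assumed fast-sigmoid surrogate).
import torch

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()            # hard threshold: spike if v > 0

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        beta = 10.0                       # surrogate sharpness (assumed)
        surrogate = 1.0 / (beta * v.abs() + 1.0) ** 2
        return grad_output * surrogate    # smooth stand-in for the Heaviside derivative

spike = SurrogateSpike.apply
v = torch.randn(5, requires_grad=True)    # toy membrane potentials
spike(v).sum().backward()                 # gradients flow via the surrogate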
Analog electronic and optical computing exhibit tremendous advantages over digital computing for accelerating deep learning when operations are executed at low precision. In this work, we derive a relationship between analog precision, which is limited by noise, and digital bit precision. We propose extending analog computing architectures to support varying levels of precision by repeating operations and averaging the result, decreasing the impact of noise. Such architectures enable programmable tradeoffs between precision and other desirable performance metrics such as energy efficiency or throughput. To utilize dynamic precision, we propose a method for learning the precision of each layer of a pre-trained model without retraining network weights. We evaluate this method on analog architectures subject to a variety of noise sources such as shot noise, thermal noise, and weight noise and find that employing dynamic precision reduces energy consumption by up to 89% for computer vision models such as Resnet50 and by 24% for natural language processing models such as BERT. In one example, we apply dynamic precision to a shot-noise limited homodyne optical neural network and simulate inference at an optical energy consumption of 2.7 aJ/MAC for Resnet50 and 1.6 aJ/MAC for BERT with <2% accuracy degradation.
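The core statistical argument can be illustrated numerically: averaging N repetitions of a noisy analog operation shrinks the noise standard deviation by a factor of sqrt(N), i.e. it buys roughly 0.5*log2(N) additional effective bits. The Gaussian noise model and the numbers below are illustrative assumptions, not the paper's device models.

# Averaging repeated noisy analog MACs (assumed Gaussian read noise):
# the standard deviation of the averaged result falls as sigma / sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
true_value = 0.37                 # ideal MAC result (arbitrary)
sigma = 0.05                      # per-operation analog noise (assumed)

for n_repeats in (1, 4, 16, 64):
    samples = true_value + sigma * rng.standard_normal((100_000, n_repeats))
    averaged = samples.mean(axis=1)
    print(n_repeats, averaged.std())   # shrinks roughly as sigma / sqrt(n_repeats)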
We introduce a Lyapunov function for the dynamics of memristive circuits, and compare the effectiveness of memristors in minimizing the function to widely used optimization software. We study, in particular, three classes of problems that can be directly embedded in a circuit topology, and show that memristors effectively (and quickly) extremize these functionals.
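The specific functional derived for memristive networks is not reproduced here; the defining property any such Lyapunov function L must satisfy along the circuit trajectories x(t) is

\[
\frac{\mathrm{d}}{\mathrm{d}t}\, L\bigl(x(t)\bigr) \;=\; \nabla L\bigl(x(t)\bigr) \cdot \dot{x}(t) \;\le\; 0,
\]

typically with equality only at fixed points, so letting the circuit relax is equivalent to performing a descent on L.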
A new approach to performing analog optical differentiation using half-wavelength slabs is presented. First, a half-wavelength dielectric slab is used to design a first-order differentiator. The latter works properly for both major polarizations, in contrast to designs based on the Brewster effect [Opt. Lett. 41, 3467 (2016)]. Inspired by the proposed dielectric differentiator, and by exploiting the unique features of graphene, we further design and demonstrate a reconfigurable and highly miniaturized differentiator using a half-wavelength plasmonic graphene film. To the best of our knowledge, our proposed graphene-based differentiator is even smaller than the most compact differentiator presented so far [Opt. Lett. 40, 5239 (2015)].
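For reference, the target response of a first-order spatial analog differentiator, whether realized with a dielectric slab or a graphene film, is a transfer function proportional to the transverse wavenumber in the spatial-Fourier domain:

\[
\tilde{u}_{\mathrm{out}}(k_x) = H(k_x)\,\tilde{u}_{\mathrm{in}}(k_x), \qquad H(k_x) \propto i k_x \;\Longleftrightarrow\; u_{\mathrm{out}}(x) \propto \frac{\partial u_{\mathrm{in}}(x)}{\partial x}.
\]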
The rapidity and low power consumption of superconducting electronics make them an ideal substrate for physical reservoir computing, which commandeers the computational power inherent in the evolution of a dynamical system for the purpose of performing machine learning tasks. We focus on a subset of superconducting circuits that exhibit soliton-like dynamics in simple transmission-line geometries. With numerical simulations we demonstrate the effectiveness of these circuits in performing higher-order parity calculations and channel equalization at rates approaching 100 Gb/s. The availability of a proven superconducting logic scheme considerably simplifies the path to a fully integrated reservoir computing platform and makes superconducting reservoirs an enticing substrate for high-rate signal processing applications.
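The sketch below runs a generic software echo-state reservoir on a two-bit parity task as a stand-in for the soliton-based superconducting reservoirs simulated in the paper; the reservoir size, spectral radius, input stream, and ridge-regression readout are all assumptions made for illustration.

# Generic echo-state reservoir on a 2-bit parity (XOR of current and
# previous input bit); not the superconducting circuit model itself.
import numpy as np

rng = np.random.default_rng(1)
T, N = 5000, 100
u = rng.integers(0, 2, T)                        # random binary input stream
target = u ^ np.roll(u, 1)                       # parity of current and previous bit

W_in = rng.uniform(-0.5, 0.5, N)
W = rng.uniform(-0.5, 0.5, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius to 0.9

x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])             # reservoir state update
    states[t] = x

# Ridge-regression readout trained on collected states (100-step washout).
S, y = states[100:], target[100:]
w_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(N), S.T @ y)
pred = (S @ w_out > 0.5).astype(int)
print("parity accuracy:", (pred == y).mean())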