
Design of Task-Specific Optical Systems Using Broadband Diffractive Neural Networks

Posted by Aydogan Ozcan
Publication date: 2019
Language: English





We report a broadband diffractive optical neural network design that simultaneously processes a continuum of wavelengths generated by a temporally-incoherent broadband source to all-optically perform a specific task learned using deep learning. We experimentally validated the success of this broadband diffractive neural network architecture by designing, fabricating and testing seven different multi-layer, diffractive optical systems that transform the optical wavefront generated by a broadband THz pulse to realize (1) a series of tunable, single passband as well as dual passband spectral filters, and (2) spatially-controlled wavelength de-multiplexing. Merging the native or engineered dispersion of various material systems with a deep learning-based design strategy, broadband diffractive neural networks help us engineer light-matter interaction in 3D, diverging from intuitive and analytical design methods to create task-specific optical components that can all-optically perform deterministic tasks or statistical inference for optical machine learning.
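The design idea can be summarized in code. The following is a minimal sketch (not the authors' released implementation) of a broadband diffractive forward model in PyTorch: trainable layer thicknesses impose wavelength-dependent phase delays, each wavelength of the incoherent pulse is propagated with the angular spectrum method, and a loss on the detector power spectrum is used to learn a passband filter. The names `BroadbandDiffractiveNet`, `refractive_index`, and `target_spectrum` are illustrative assumptions.

```python
import math
import torch

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field over a free-space distance z (angular spectrum method)."""
    n = field.shape[-1]
    fx = torch.fft.fftfreq(n, d=dx)
    FX, FY = torch.meshgrid(fx, fx, indexing="ij")
    arg = torch.clamp(1.0 / wavelength**2 - FX**2 - FY**2, min=0.0)  # drop evanescent waves
    kz = 2 * math.pi * torch.sqrt(arg) * z
    H = torch.exp(torch.complex(torch.zeros_like(kz), kz))           # transfer function
    return torch.fft.ifft2(torch.fft.fft2(field) * H)

class BroadbandDiffractiveNet(torch.nn.Module):
    """Trainable multi-layer diffractive system evaluated wavelength by wavelength (illustrative)."""
    def __init__(self, n_layers=3, n_pixels=128, dx=0.5e-3, z=30e-3):
        super().__init__()
        # One trainable material thickness per diffractive feature ("neuron") per layer.
        self.thickness = torch.nn.Parameter(1e-4 * torch.rand(n_layers, n_pixels, n_pixels))
        self.dx, self.z = dx, z

    def forward(self, wavelengths, refractive_index):
        powers = []
        for lam in wavelengths:
            field = torch.ones(self.thickness.shape[1:], dtype=torch.complex64)
            for t in self.thickness:
                # Dispersive phase delay of a thin layer: 2*pi*(n(lambda) - 1)*thickness / lambda
                phase = 2 * math.pi * (refractive_index(lam) - 1.0) * t / lam
                field = field * torch.exp(torch.complex(torch.zeros_like(phase), phase))
                field = angular_spectrum_propagate(field, lam, self.dx, self.z)
            powers.append((field.abs() ** 2).sum())  # total power on a single output detector
        return torch.stack(powers)

# Training (illustrative): shape the detector power spectrum into a target passband.
# loss = ((net(wavelengths, refractive_index) - target_spectrum) ** 2).mean()
```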


Read also

We introduce an all-optical Diffractive Deep Neural Network (D2NN) architecture that can learn to implement various functions after deep learning-based design of passive diffractive layers that work collectively. We experimentally demonstrated the success of this framework by creating 3D-printed D2NNs that learned to implement handwritten digit classification and the function of an imaging lens at the terahertz spectrum. With the existing plethora of 3D-printing and other lithographic fabrication methods as well as spatial light modulators, this all-optical deep learning framework can perform, at the speed of light, various complex functions that computer-based neural networks can implement, and will find applications in all-optical image analysis, feature detection and object classification, also enabling new camera designs and optical components that can learn to perform unique tasks using D2NNs.
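As an illustration of this design loop (a sketch under simplifying assumptions, not the authors' code): a single-wavelength classifier can be trained by treating each layer as a learnable phase mask, reusing a propagation routine such as the `angular_spectrum_propagate` function sketched above, and summing the output intensity over ten detector regions to form class scores. The detector layout and dimensions below are assumptions.

```python
import torch

class D2NNClassifier(torch.nn.Module):
    """Phase-only diffractive layers followed by regional intensity readout (illustrative)."""
    def __init__(self, n_layers=5, n_pixels=200, wavelength=0.75e-3, dx=0.4e-3, z=40e-3):
        super().__init__()
        self.phase = torch.nn.Parameter(torch.zeros(n_layers, n_pixels, n_pixels))
        self.wavelength, self.dx, self.z = wavelength, dx, z
        # Hypothetical detector layout: ten non-overlapping regions arranged as a 2x5 grid.
        self.regions = [(r, c) for r in range(2) for c in range(5)]

    def forward(self, input_field):                      # complex input wavefront, shape (B, N, N)
        field = input_field
        for p in self.phase:
            field = field * torch.exp(torch.complex(torch.zeros_like(p), p))
            field = angular_spectrum_propagate(field, self.wavelength, self.dx, self.z)
        intensity = field.abs() ** 2
        h = intensity.shape[-2] // 2                     # split the output plane into the 2x5 grid
        w = intensity.shape[-1] // 5
        scores = [intensity[..., r*h:(r+1)*h, c*w:(c+1)*w].sum(dim=(-2, -1))
                  for r, c in self.regions]
        return torch.stack(scores, dim=-1)               # (B, 10) class scores

# Training step (illustrative): concentrate power on the detector of the true digit.
# loss = torch.nn.functional.cross_entropy(model(field_batch), labels)
```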
For the benefit of designing scalable, fault-resistant optical neural networks (ONNs), we investigate the effects that architectural design has on an ONN's robustness to imprecise components. We train two ONNs -- one with a more tunable design (GridNet) and one with better fault tolerance (FFTNet) -- to classify handwritten digits. When simulated without any imperfections, GridNet yields better accuracy (~98%) than FFTNet (~95%). However, under a small amount of error in their photonic components, the more fault-tolerant FFTNet overtakes GridNet. We further provide thorough quantitative and qualitative analyses of ONN sensitivity to varying levels and types of imprecision. Our results offer guidelines for the principled design of fault-tolerant ONNs as well as a foundation for further research.
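A minimal way to probe this kind of fault tolerance in simulation (an illustration, not the paper's exact setup) is to perturb a trained model's photonic parameters, e.g. phase-shifter settings, with Gaussian error of increasing standard deviation and record the resulting accuracy; the "phase" naming convention below is an assumption.

```python
import copy
import torch

@torch.no_grad()
def accuracy_under_phase_error(model, loader, sigma):
    """Classification accuracy after adding Gaussian error of std `sigma` to every
    parameter assumed to represent a phase setting (illustrative criterion)."""
    noisy = copy.deepcopy(model)
    for name, param in noisy.named_parameters():
        if "phase" in name:                        # assumed naming convention
            param.add_(sigma * torch.randn_like(param))
    correct = total = 0
    for x, y in loader:
        pred = noisy(x).argmax(dim=-1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total

# Sweep the error level to compare architectures, e.g. GridNet vs. FFTNet:
# curve = [accuracy_under_phase_error(model, test_loader, s) for s in (0.0, 0.01, 0.02, 0.05)]
```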
Deep neural networks (DNNs) have substantial computational requirements, which greatly limit their performance in resource-constrained environments. Recently, there have been increasing efforts on optical neural networks and optical-computing-based DNN hardware, which bring significant advantages for deep learning systems in terms of power efficiency, parallelism and computational speed. Among them, free-space diffractive deep neural networks (D$^2$NNs), based on light diffraction, feature millions of neurons in each layer interconnected with neurons in neighboring layers. However, due to the challenge of implementing reconfigurability, deploying different DNN algorithms requires re-building and duplicating the physical diffractive systems, which significantly degrades the hardware efficiency in practical application scenarios. Thus, this work proposes a novel hardware-software co-design method that enables robust and noise-resilient multi-task learning in D$^2$NNs. Our experimental results demonstrate significant improvements in versatility and hardware efficiency, and also demonstrate the robustness of the proposed multi-task D$^2$NN architecture under wide noise ranges of all system components. In addition, we propose a domain-specific regularization algorithm for training the proposed multi-task architecture, which can be used to flexibly adjust the desired performance for each task.
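The abstract gives no implementation details, but the general pattern of noise-resilient multi-task training can be sketched as follows (an assumption-laden illustration, not the proposed co-design or regularization method): shared diffractive layers feed task-specific readouts, simulated device noise is injected during training, and the per-task losses are summed with tunable weights.

```python
import torch

def multitask_training_step(shared_layers, heads, optimizer, batches, task_weights, noise_std=0.05):
    """One training step for a shared diffractive stack with per-task readout heads.
    `shared_layers`, `heads`, and `task_weights` are illustrative assumptions; the
    injected noise stands in for fabrication and detection error during training."""
    optimizer.zero_grad()
    total_loss = 0.0
    for (x, y), head, w in zip(batches, heads, task_weights):
        features = shared_layers(x)
        # Simulated system noise pushes the learned design toward hardware-error tolerance.
        features = features + noise_std * torch.randn_like(features)
        total_loss = total_loss + w * torch.nn.functional.cross_entropy(head(features), y)
    total_loss.backward()
    optimizer.step()
    return float(total_loss)
```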
Recent studies have shown that convolutional neural networks (CNNs) can be trained to perform modal decomposition using intensity images of optical fields. A fundamental limitation of these techniques is that the modal phases cannot be uniquely calculated from a single intensity image. Knowledge of the modal phases is crucial for wavefront sensing, alignment and mode-matching applications. Heterodyne imaging techniques can provide images of the transverse complex amplitude and phase profile of laser beams at high resolutions and frame rates. In this work we train a CNN to perform modal decomposition using simulated heterodyne images, allowing the complete modal phases to be predicted. This is, to our knowledge, the first machine learning decomposition scheme to utilize complex phase information to perform modal decomposition. We compare our network with a traditional overlap-integral and center-of-mass centering algorithm and show that it is both less sensitive to beam centering and, on average, more accurate.
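One way to feed the complex heterodyne field to a CNN (a sketch with assumed shapes and layer sizes, not the authors' architecture) is to stack its real and imaginary parts as two input channels and regress one real/imaginary coefficient pair per mode, from which the modal amplitudes and phases follow.

```python
import torch

class ModalDecompositionCNN(torch.nn.Module):
    """CNN mapping a complex field image to complex modal coefficients (illustrative)."""
    def __init__(self, n_modes=6, image_size=64):
        super().__init__()
        self.features = torch.nn.Sequential(
            torch.nn.Conv2d(2, 32, 3, padding=1), torch.nn.ReLU(), torch.nn.MaxPool2d(2),
            torch.nn.Conv2d(32, 64, 3, padding=1), torch.nn.ReLU(), torch.nn.MaxPool2d(2),
        )
        self.head = torch.nn.Linear(64 * (image_size // 4) ** 2, 2 * n_modes)

    def forward(self, complex_field):                     # (B, H, W) complex tensor
        x = torch.stack((complex_field.real, complex_field.imag), dim=1)  # two real channels
        x = self.features(x).flatten(1)
        out = self.head(x)                                # (B, 2 * n_modes)
        re, im = out.chunk(2, dim=-1)
        coeffs = torch.complex(re, im)                    # complex modal coefficients
        return coeffs.abs(), coeffs.angle()               # modal amplitudes and phases
```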
Yong-Liang Xiao, 2020
The realization of deep learning with coherent diffraction has developed remarkably in recent years, benefiting from the fact that matrix multiplication can be executed optically in parallel and with little power consumption. A coherent optical field, propagated as a complex-valued entity, can be manipulated into a task-oriented output with statistical inference. In this paper, we present a unitary learning protocol for deep diffractive neural networks that meets the physical unitary prior of coherent diffraction. Unitary learning is a backpropagation scheme that updates the unitary weights by translating gradients between Euclidean and Riemannian space. The temporal-space evolution characteristic of unitary learning is formulated and elucidated. In particular, a compatibility condition on how to select nonlinear activations in complex space is unveiled, encapsulating the fundamental sigmoid, tanh and quasi-ReLU in complex space. As a preliminary application, a deep diffractive neural network with unitary learning is tentatively implemented on 2D classification and verification tasks.
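One standard way to perform such a unitarity-preserving update, translating the Euclidean gradient onto the tangent space of the unitary group and retracting back, is a Cayley-transform step; the sketch below illustrates that standard scheme and is not necessarily the exact update used in the paper.

```python
import torch

def cayley_unitary_update(W, grad, lr):
    """One Riemannian gradient step that keeps the weight matrix W unitary.
    W: (n, n) complex unitary weights; grad: Euclidean gradient dL/dW (conjugate convention assumed)."""
    # Skew-Hermitian direction: the Euclidean gradient projected onto the tangent space at W.
    A = grad @ W.conj().T - W @ grad.conj().T
    I = torch.eye(W.shape[0], dtype=W.dtype, device=W.device)
    # Cayley retraction maps the skew-Hermitian direction back onto the unitary group.
    return torch.linalg.solve(I + (lr / 2) * A, (I - (lr / 2) * A) @ W)

# Usage (illustrative): after backprop, replace the weight in place of a plain SGD step.
# with torch.no_grad():
#     W.copy_(cayley_unitary_update(W, W.grad, lr=0.01))
```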