We report experimentally and theoretically on the detection of edge information in digital images using ultrafast spiking optical artificial neurons, towards convolutional neural networks (CNNs). In tandem with traditional convolution techniques, a photonic neuron model based on a vertical-cavity surface-emitting laser (VCSEL) is implemented experimentally to threshold and activate fast spiking responses upon the detection of target edge features in digital images. Edges of different directionalities are detected using individual kernel operators, and complete image edge detection is achieved using the gradient magnitude. Importantly, the neuromorphic (brain-like) image edge detection system of this work uses commercially sourced VCSELs exhibiting spiking responses at sub-nanosecond rates (many orders of magnitude faster than biological neurons) and operating at the telecom wavelength of 1300 nm, making our approach compatible with optical communication and data-center technologies. These results therefore offer exciting prospects for ultrafast photonic implementations of neural networks towards computer vision and decision-making systems for future artificial intelligence applications.
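As a point of reference for the kernel-operator and gradient-magnitude steps described above, here is a minimal software sketch of conventional directional edge detection; the Sobel-style kernels and function names are illustrative assumptions, and in the paper's system the thresholding and activation are carried out by the spiking VCSEL neuron rather than in software.

```python
import numpy as np
from scipy.signal import convolve2d

# Assumed 3x3 Sobel-style directional kernel operators (illustrative only).
K_X = np.array([[-1, 0, 1],
                [-2, 0, 2],
                [-1, 0, 1]])   # responds to vertical edges
K_Y = K_X.T                    # responds to horizontal edges

def gradient_magnitude(image):
    """Conventional software edge detection: convolve with directional
    kernels, then combine the responses into a gradient magnitude map."""
    gx = convolve2d(image, K_X, mode="same", boundary="symm")
    gy = convolve2d(image, K_Y, mode="same", boundary="symm")
    return np.hypot(gx, gy)
```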
All-optical binary convolution with a photonic spiking vertical-cavity surface-emitting laser (VCSEL) neuron is proposed and demonstrated experimentally for the first time. Optical inputs, extracted from digital images and temporally encoded using rectangular pulses, are injected into the VCSEL neuron, which delivers the convolution result in the number of fast (<100 ps long) spikes fired. Experimental and numerical results show that binary convolution is achieved successfully with a single spiking VCSEL neuron, and that all-optical binary convolution can be used to calculate image gradient magnitudes to detect edge features and separate vertical and horizontal components in source images. We also show that this all-optical spiking binary convolution system is robust to noise and can operate with high-resolution images. Additionally, the proposed system offers important advantages such as ultrafast speed, high energy efficiency and simple hardware implementation, highlighting the potential of spiking photonic VCSEL neurons for high-speed neuromorphic image processing systems and future photonic spiking convolutional neural networks.
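A minimal numerical sketch of the binary-convolution principle described above, under the assumption that the convolution result is simply the count of coincident 1s between a binary image patch and a binary kernel (the quantity the VCSEL neuron reports as a number of fast spikes); the code is illustrative and does not model the experimental system.

```python
import numpy as np

def binary_convolution_count(patch, kernel):
    """Binary convolution as a coincidence count: the result equals the
    number of positions where both the image patch and the kernel are 1.
    In the photonic system this count is read out as the number of fast
    spikes fired by the VCSEL neuron."""
    patch = np.asarray(patch, dtype=bool)
    kernel = np.asarray(kernel, dtype=bool)
    return int(np.sum(patch & kernel))

# Example: a 2x2 binary patch against a 2x2 binary kernel
print(binary_convolution_count([[1, 0], [1, 1]], [[1, 1], [0, 1]]))  # -> 2
```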
We present a new convolutional neural network (CNN)-based ImageJ plugin for fluorescence microscopy image denoising that achieves an average improvement of 7.5 dB in peak signal-to-noise ratio (PSNR) and denoises an image within 80 ms.
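For context on the reported metric, a minimal sketch of how PSNR is computed; a 7.5 dB improvement corresponds to roughly a 5.6x reduction in mean squared error. The function name and assumed 8-bit data range are illustrative, not taken from the plugin.

```python
import numpy as np

def psnr(reference, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    (denoised) test image. A gain of 7.5 dB implies the mean squared
    error shrinks by a factor of 10**0.75, i.e. about 5.6x."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)
```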
Purpose: We propose a deep learning-based computer-aided detection (CADe) method to detect breast lesions in ultrafast DCE-MRI sequences. This method uses both the three-dimensional spatial information and the temporal information obtained from the early phase of the dynamic acquisition. Methods: The proposed CADe method, based on a modified 3D RetinaNet model, operates on ultrafast T1-weighted sequences, which are preprocessed for motion compensation and temporal normalization and are cropped before being passed to the model. The model is optimized to enable the detection of relatively small breast lesions in a screening setting, focusing on lesions that are harder to differentiate from confounding structures inside the breast. Results: The method was developed on a dataset of 489 ultrafast MRI studies from 462 patients containing a total of 572 lesions (365 malignant, 207 benign). It achieved a detection rate of 0.90 (0.876-0.934), a sensitivity of 0.95 (0.934-0.980), and a benign-lesion detection rate of 0.81 (0.751-0.871) at 4 false positives per normal breast, with 10-fold cross-testing. Conclusions: The deep learning architecture used for the proposed CADe application can efficiently detect benign and malignant lesions on ultrafast DCE-MRI. Furthermore, utilizing the less visible, hard-to-detect lesions in training improves the learning process and, subsequently, the detection of malignant breast lesions.
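As a hedged illustration of the preprocessing mentioned above, one plausible form of temporal normalization for an ultrafast DCE series is to express each frame as relative enhancement over the pre-contrast baseline; the authors' exact pipeline is not specified here, so this sketch is an assumption.

```python
import numpy as np

def temporal_normalize(dce_series, eps=1e-6):
    """One plausible temporal normalization for an ultrafast DCE series of
    shape (time, z, y, x): express each frame as relative enhancement with
    respect to the pre-contrast (first) frame. Illustrative assumption,
    not the authors' exact preprocessing."""
    baseline = dce_series[0]
    return (dce_series - baseline) / (baseline + eps)
```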
We train spiking deep networks using leaky integrate-and-fire (LIF) neurons and achieve state-of-the-art results for spiking networks on the CIFAR-10 and MNIST datasets. This demonstrates that biologically plausible spiking LIF neurons can be integrated into deep networks and perform as well as other spiking models (e.g. integrate-and-fire). We achieved this result by softening the LIF response function, such that its derivative remains bounded, and by training the network with noise to provide robustness against the variability introduced by spikes. Our method is general and could be applied to other neuron types, including those used on modern neuromorphic hardware. Our work brings more biological realism into modern image classification models, with the hope that these models can inform how the brain performs this difficult task. It also provides new methods for training deep networks to run on neuromorphic hardware, with the aim of fast, power-efficient image classification for robotics applications.
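A minimal sketch of the "softened" LIF response described above, assuming the common approach of replacing the hard rectification in the LIF rate equation with a softplus so that the derivative stays bounded near threshold; the time constants and smoothing parameter below are illustrative values, not the paper's settings.

```python
import numpy as np

def soft_lif_rate(j, tau_rc=0.02, tau_ref=0.002, gamma=0.05):
    """Soft LIF firing-rate curve: the hard rectification max(j - 1, 0) in
    the standard LIF rate equation is replaced by a softplus, keeping the
    response and its derivative smooth and bounded near threshold."""
    z = gamma * np.log1p(np.exp((j - 1.0) / gamma))  # softened max(j - 1, 0)
    z = np.maximum(z, 1e-12)                          # avoid division by zero below threshold
    return 1.0 / (tau_ref + tau_rc * np.log1p(1.0 / z))
```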
We introduce a new supervised learning algorithm to train spiking neural networks for classification. The algorithm overcomes a limitation of existing multi-spike learning methods: it solves the problem of interference between interacting output spikes during a learning trial. This learning interference causes performance in existing approaches to degrade as the number of output spikes increases, and represents an important limitation of existing multi-spike learning approaches. We address learning interference by introducing a novel mechanism that balances the magnitudes of weight adjustments during learning, which in theory allows every spike to converge simultaneously to its desired timing. Our results indicate that our method achieves significantly higher memory capacity and faster convergence than existing approaches to multi-spike classification. On the widely used Iris and MNIST datasets, our algorithm achieves predictive performance competitive with state-of-the-art approaches.
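A schematic sketch of the balancing idea described above, assuming per-output-spike gradient-style updates whose magnitudes are equalized before being combined, so that no single spike's error dominates a learning trial; the update rule and names are illustrative, not the authors' exact algorithm.

```python
import numpy as np

def balanced_weight_update(per_spike_grads, eta=0.01):
    """Equalize the magnitude of each output spike's weight adjustment
    before combining them, so no single spike's error dominates the
    learning trial (schematic illustration only)."""
    normalized = [g / np.linalg.norm(g)               # one gradient vector per output spike
                  for g in per_spike_grads
                  if np.linalg.norm(g) > 0]
    if not normalized:                                # all-zero gradients: no adjustment
        return np.zeros_like(np.asarray(per_spike_grads[0], dtype=float))
    return eta * np.sum(normalized, axis=0)           # combined, balanced adjustment
```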