
Coupled nonlinear delay systems as deep convolutional neural networks

 Posted by Daniel Brunner
 Publication date: 2019
  Research field: Informatics Engineering
Paper language: English





Neural networks are currently transforming the field of computer algorithms, yet their emulation on current computing substrates is highly inefficient. Reservoir computing has been implemented successfully on a large variety of substrates and gave new insight into overcoming this implementation bottleneck. Despite its success, the approach lags behind the state of the art in deep learning. We therefore extend time-delay reservoirs to deep networks and demonstrate that these conceptually correspond to deep convolutional neural networks. Convolution is intrinsically realized on a substrate level by generic drive-response properties of dynamical systems. The resulting novelty is that vector-matrix products between layers, which cause low efficiency in today's substrates, are avoided. Compared to singleton time-delay reservoirs, our deep network achieves accuracy improvements of at least an order of magnitude in Mackey-Glass and Lorenz time-series prediction.
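The singleton time-delay reservoir that the paper takes as its baseline can be sketched in a few lines: a single nonlinear node with delayed feedback is time-multiplexed into N "virtual nodes" by an input mask, and only a linear readout is trained. The sketch below is illustrative only — the parameter names (`eta`, `gamma`), the mask, the toy sine task, and the ridge readout are assumptions, not the paper's deep architecture or its Mackey-Glass/Lorenz benchmarks.

```python
import numpy as np

# Minimal single-loop time-delay reservoir sketch (illustrative assumptions,
# not the paper's deep network). One nonlinear node with delayed feedback is
# time-multiplexed into N virtual nodes via a random input mask.

rng = np.random.default_rng(0)
N = 100                          # virtual nodes per delay loop
mask = rng.uniform(-1.0, 1.0, N) # fixed random input mask

def reservoir_states(u, eta=0.8, gamma=1.0):
    """Drive the delay loop with a scalar input series u; return the state matrix."""
    x = np.zeros(N)
    states = np.empty((len(u), N))
    for t, ut in enumerate(u):
        # each virtual node receives its own state from one delay loop ago
        # (delayed feedback) plus the masked input, through a tanh nonlinearity
        x = np.tanh(eta * x + gamma * mask * ut)
        states[t] = x
    return states

# toy task: one-step-ahead prediction of a sine wave
u = np.sin(0.2 * np.arange(1200))
X = reservoir_states(u[:-1])
y = u[1:]

# linear readout trained by ridge regression, discarding a 200-step washout
washout = 200
A = X[washout:].T @ X[washout:] + 1e-6 * np.eye(N)
W = np.linalg.solve(A, X[washout:].T @ y[washout:])
pred = X @ W
nmse = np.mean((pred[washout:] - y[washout:]) ** 2) / np.var(y)
print(f"in-sample NMSE: {nmse:.2e}")
```

Only the readout weights `W` are trained; the delay loop itself is a fixed dynamical system, which is what makes the scheme attractive for physical substrates.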




Read also

Convolutional Neural Networks (CNNs) are a class of Artificial Neural Networks (ANNs) that employ the method of convolving input images with filter kernels for object recognition and classification purposes. In this paper, we propose a photonic circuit architecture which could consume a fraction of the energy per inference compared with state-of-the-art electronics.
We report the performance characteristics of a notional Convolutional Neural Network based on the previously proposed Multiply-Accumulate-Activate-Pool set, an MTJ-based spintronic circuit made to compute multiple neural functionalities in parallel. A study of image classification with the MNIST handwritten digits dataset using this network is provided via simulation. The effects of changing the weight representation precision, the severity of device process variation within the MAAP sets, and the computational redundancy are provided. The emulated network achieves between 90 and 95% image classification accuracy at a cost of ~100 nJ per image.
Future large-scale surveys with high-resolution imaging will provide us with a few $10^5$ new strong galaxy-scale lenses. These strong lensing systems, however, will be contained in large data amounts which are beyond the capacity of human experts to classify visually in an unbiased way. We present a new strong gravitational lens finder based on convolutional neural networks (CNNs). The method was applied to the Strong Lensing challenge organised by the Bologna Lens Factory, where it achieved first and third place on the space-based and ground-based data sets, respectively. The goal was to find a fully automated lens finder for ground-based and space-based surveys which minimizes human inspection. We compare the results of our CNN architecture and three new variations (invariant views and residual) on the simulated data of the challenge. Each method has been trained separately 5 times on 17 000 simulated images, cross-validated using 3 000 images and then applied to a 100 000 image test set. We used two different metrics for evaluation, the area under the receiver operating characteristic curve (AUC) score and the recall with no false positives ($\mathrm{Recall}_{\mathrm{0FP}}$). For ground-based data our best method achieved an AUC score of $0.977$ and a $\mathrm{Recall}_{\mathrm{0FP}}$ of $0.50$. For space-based data our best method achieved an AUC score of $0.940$ and a $\mathrm{Recall}_{\mathrm{0FP}}$ of $0.32$. On space-based data, adding dihedral invariance to the CNN architecture diminished the overall score but achieved a higher no-contamination recall. We found that committees of 5 CNNs produce the best recall at zero contamination and consistently achieve better AUC scores than a single CNN. For every variation of our CNN lens finder, we achieve AUC scores close to $1$ within $6\%$.
We propose a new network architecture for standard spin-Hall magnetic tunnel junction-based spintronic neurons that allows them to compute multiple critical convolutional neural network functionalities simultaneously and in parallel, saving space and time. An approximation to the Rectified Linear Unit transfer function and the local pooling function are computed simultaneously with the convolution operation itself. A proof-of-concept simulation is performed on the MNIST dataset, achieving up to 98% accuracy at a cost of less than 1 nJ for all convolution, activation and pooling operations combined. The simulations are remarkably robust to thermal noise, performing well even with very small magnetic layers.
We study the nonlinear dynamics of two delay-coupled neural systems each modelled by excitable dynamics of FitzHugh-Nagumo type and demonstrate that bistability between the stable fixed point and limit cycle oscillations occurs for sufficiently large delay times and coupling strength. As the mechanism for these delay-induced oscillations we identify a saddle-node bifurcation of limit cycles.
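The delay-coupled FitzHugh-Nagumo pair described above can be integrated with a simple Euler scheme that keeps a history buffer for the delayed coupling term. The sketch below is a minimal illustration under assumed parameter values (`eps`, `a`, `C`, `tau`) and an assumed step-like history function — it is not the paper's parameter regime, and a dedicated DDE solver would be preferable for quantitative bifurcation studies.

```python
import numpy as np

# Euler integration of two delay-coupled FitzHugh-Nagumo units:
#   eps * du_i/dt = u_i - u_i^3/3 - v_i + C * (u_j(t - tau) - u_i(t))
#         dv_i/dt = u_i + a
# Parameter values and the history function are illustrative assumptions.

eps, a = 0.01, 1.3    # time-scale separation; a > 1 gives an excitable fixed point
C, tau = 0.5, 3.0     # coupling strength and delay time (assumed values)
dt, T = 0.001, 40.0
d = int(tau / dt)     # delay expressed in integration steps
n = int(T / dt)

u = np.zeros((2, n + d))
v = np.zeros((2, n + d))
u[:, :d + 1] = np.array([[1.5], [-1.0]])  # asymmetric constant history kicks off dynamics

for t in range(d, n + d - 1):
    for i in range(2):
        j = 1 - i                                    # the other unit
        du = (u[i, t] - u[i, t]**3 / 3 - v[i, t]
              + C * (u[j, t - d] - u[i, t])) / eps   # delayed cross-coupling
        dv = u[i, t] + a
        u[i, t + 1] = u[i, t] + dt * du
        v[i, t + 1] = v[i, t] + dt * dv

print("u range:", u.min(), u.max())
```

Scanning `C` and `tau` while starting once from the constant history above and once from the fixed point $(u^*, v^*) = (-a, -a + a^3/3)$ is the natural way to probe the bistability between the stable fixed point and the delay-induced limit cycle.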