
FastONN -- Python based open-source GPU implementation for Operational Neural Networks

Published by: Junaid Malik
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Operational Neural Networks (ONNs) have recently been proposed as a special class of artificial neural networks for grid-structured data. They enable heterogeneous non-linear operations that generalize the widely adopted convolution-based neuron model. This work introduces FastONN, a fast GPU-enabled library for training operational neural networks, based on a novel vectorized formulation of the operational neurons. Leveraging automatic reverse-mode differentiation for backpropagation, FastONN enables increased flexibility through the incorporation of new operator sets and customized gradient flows. Additionally, bundled auxiliary modules offer interfaces for performance tracking and checkpointing across different data partitions and customized metrics.
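The core idea of the vectorized formulation can be illustrated with a short sketch: instead of looping over neurons, a nodal operator is applied to a broadcast weight-input tensor and the result is pooled over the input dimension, so reverse-mode autodiff covers any differentiable operator automatically. The layer below is a minimal PyTorch sketch of this idea, assuming sum pooling; the class and operator names are illustrative and not FastONN's actual API.

    import torch
    import torch.nn as nn

    # Minimal sketch of a vectorized operational (fully connected) layer,
    # assuming sum pooling; illustrative names, not FastONN's actual API.
    class OperationalLayer(nn.Module):
        def __init__(self, in_features, out_features, nodal):
            super().__init__()
            self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
            self.bias = nn.Parameter(torch.zeros(out_features))
            self.nodal = nodal  # element-wise non-linear nodal operator psi(w, x)

        def forward(self, x):
            # Broadcast to (batch, out_features, in_features), apply the nodal
            # operator element-wise, then pool over the input dimension.
            z = self.nodal(self.weight.unsqueeze(0), x.unsqueeze(1))
            return z.sum(dim=-1) + self.bias  # sum pooling

    # Example nodal operators; autograd differentiates them automatically.
    linear_op = lambda w, x: w * x           # recovers a standard dense layer
    sine_op = lambda w, x: torch.sin(w * x)  # a heterogeneous alternative

    layer = OperationalLayer(8, 4, nodal=sine_op)
    out = layer(torch.randn(32, 8))          # shape (32, 4)
    out.sum().backward()                     # reverse-mode differentiation

Swapping the nodal lambda is all it takes to change the neuron model, which is the kind of operator-set flexibility the abstract refers to.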




Related research

The recently proposed network model, Operational Neural Networks (ONNs), can generalize the conventional Convolutional Neural Networks (CNNs), which are homogenous with only a linear neuron model. As a heterogeneous network model, ONNs are based on a generalized neuron model that can encapsulate any set of non-linear operators to boost diversity and to learn highly complex and multi-modal functions or spaces with minimal network complexity and training data. However, the default search method for finding optimal operators in ONNs, the so-called Greedy Iterative Search (GIS) method, usually takes several training sessions to find a single operator set per layer. This is not only computationally demanding; network heterogeneity is also limited, since the same set of operators is then used for all neurons in each layer. To address this deficiency and exploit a superior level of heterogeneity, this study focuses on searching for the best-possible operator set(s) for the hidden neurons of the network based on the Synaptic Plasticity paradigm, which constitutes the essential learning theory in biological neurons. During training, each operator set in the library can be evaluated by its synaptic plasticity level and ranked from worst to best, and an elite ONN can then be configured using the top-ranked operator sets found at each hidden layer. Experimental results over highly challenging problems demonstrate that elite ONNs, even with few neurons and layers, can achieve superior learning performance compared to GIS-based ONNs, and as a result the performance gap over CNNs widens further.
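The ranking step described above reduces to scoring each candidate operator set during training and keeping the top-ranked sets per hidden layer. The snippet below is a minimal sketch of that selection; the operator-set names and scores are hypothetical placeholders, and the paper's actual synaptic-plasticity measure is not reproduced here.

    # Minimal sketch of the ranking step: each candidate operator set gets a
    # scalar "plasticity" score during training, and the top-ranked sets
    # configure the elite network. Names and scores are hypothetical.
    def rank_operator_sets(operator_sets, plasticity_scores):
        """Return operator sets ordered from highest to lowest plasticity."""
        ranked = sorted(zip(operator_sets, plasticity_scores),
                        key=lambda pair: pair[1], reverse=True)
        return [op_set for op_set, _ in ranked]

    op_sets = ["sin/sum", "exp/median", "linear/sum", "chirp/sum"]
    scores = [0.83, 0.41, 0.67, 0.92]                # measured during training
    elite = rank_operator_sets(op_sets, scores)[:2]  # top sets for one layer
    print(elite)  # ['chirp/sum', 'sin/sum']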
Neural networks have shown great potential in many applications like speech recognition, drug discovery, image classification, and object detection. Neural network models are inspired by biological neural networks, but they are optimized to perform machine learning tasks on digital computers. The proposed work explores the possibility of using living neural networks in vitro as basic computational elements for machine learning applications. A new supervised STDP-based learning algorithm is proposed in this work, which takes neuron engineering constraints into account. It achieves 74.7% accuracy on the MNIST benchmark for handwritten digit recognition.
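For context, a pairwise STDP rule adjusts a synapse according to the time difference between pre- and post-synaptic spikes. The sketch below shows a generic exponential-window form, not the paper's specific supervised variant; the amplitudes and time constants are illustrative assumptions.

    import math

    # Generic pairwise STDP update with exponential windows; the constants
    # are illustrative, not those tuned in the paper.
    A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
    TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants in ms

    def stdp_delta_w(t_pre, t_post):
        """Weight change for one pre/post spike pair (times in ms)."""
        dt = t_post - t_pre
        if dt > 0:   # pre fires before post -> potentiation
            return A_PLUS * math.exp(-dt / TAU_PLUS)
        else:        # post fires before pre -> depression
            return -A_MINUS * math.exp(dt / TAU_MINUS)

    print(stdp_delta_w(10.0, 15.0))   # small positive update
    print(stdp_delta_w(15.0, 10.0))   # small negative update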
For gradient computation across the time domain in Spiking Neural Network (SNN) training, two different approaches have been studied independently. The first computes gradients with respect to changes in spike activation (activation-based methods), and the second computes gradients with respect to changes in spike timing (timing-based methods). In this work, we present a comparative study of the two methods and propose a new supervised learning method that combines them. The proposed method uses each individual spike more effectively by shifting spike timings, as in the timing-based methods, as well as generating and removing spikes, as in the activation-based methods. Experimental results show that the proposed method achieves higher accuracy and efficiency than the previous approaches.
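A rough picture of the combination: where a spike exists, its timing can be nudged (timing-based term), while a surrogate derivative lets spikes appear or vanish (activation-based term). The sketch below blends the two per time step; the surrogate shape, the timing approximation, and the mixing weight alpha are all illustrative assumptions rather than the paper's exact rule.

    import numpy as np

    # Conceptual sketch only: blend a timing-based gradient (defined where
    # spikes occurred) with an activation-based surrogate gradient.
    def combined_spike_grad(spikes, membrane, grad_out, threshold=1.0, alpha=0.5):
        # Activation-based term: surrogate derivative of the spike function,
        # peaked where the membrane potential is near threshold.
        surrogate = 1.0 / (1.0 + (np.pi * (membrane - threshold)) ** 2)
        act_term = grad_out * surrogate
        # Timing-based term: only defined where spikes actually occurred;
        # crudely approximated here via the local membrane slope.
        slope = np.gradient(membrane)
        tim_term = np.where(spikes > 0,
                            -grad_out / np.clip(np.abs(slope), 1e-3, None),
                            0.0)
        return alpha * tim_term + (1.0 - alpha) * act_term

    T = 100
    membrane = np.random.rand(T) * 1.2           # toy membrane trace
    spikes = (membrane > 1.0).astype(float)      # toy spike train
    grad = combined_spike_grad(spikes, membrane, np.random.randn(T))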
Mohsen Moradi (2017)
In this study we determined neural network weights and biases with the Imperialist Competitive Algorithm (ICA) in order to train a network for predicting earthquake intensity on the Richter scale. We used dependent parameters such as earthquake occurrence time, epicenter latitude and longitude in degrees, focal depth in kilometers, and the distances in kilometers from the seismological center to the epicenter and to the earthquake's focal center, provided by the Berkeley database. The studied neural network has two hidden layers: the first with 16 neurons and the second with 24 neurons. Using the ICA algorithm, the average error on the test data is 0.0007 with a variance of 0.318. Measured by the MSE criterion on the Richter scale, the prediction error is 0.101 for ICA, versus 0.115 when using a genetic algorithm (GA).
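As a rough illustration of the approach, the sketch below fits a tiny network's weights by minimizing MSE with a stripped-down population search in the spirit of ICA (assimilation toward the best candidate plus a replacement step). The network size, data, and coefficients are illustrative and do not reproduce the paper's 16/24-neuron architecture or its dataset.

    import numpy as np

    rng = np.random.default_rng(0)

    def mse_cost(weights, X, y):
        # Tiny one-hidden-layer network decoded from a flat weight vector.
        W1 = weights[:X.shape[1] * 4].reshape(X.shape[1], 4)
        w2 = weights[X.shape[1] * 4:]
        pred = np.tanh(X @ W1) @ w2
        return np.mean((pred - y) ** 2)

    X = rng.normal(size=(64, 5))
    y = rng.normal(size=64)
    dim = 5 * 4 + 4
    pop = rng.normal(size=(30, dim))          # "countries" = weight vectors

    for _ in range(200):
        costs = np.array([mse_cost(w, X, y) for w in pop])
        best = pop[np.argmin(costs)].copy()   # the strongest imperialist
        # Assimilation: every colony drifts toward the best empire, with noise.
        pop += 0.5 * (best - pop) + 0.05 * rng.normal(size=pop.shape)
        pop[np.argmax(costs)] = rng.normal(size=dim)  # revolution/replacement

    print(min(mse_cost(w, X, y) for w in pop))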
We study black-box adversarial attacks on image classifiers in a constrained threat model, where adversaries can only modify a small fraction of pixels in the form of scratches on an image. We show that adversaries can generate localized adversarial scratches that cover less than 5% of the pixels in an image and achieve targeted success rates of 98.77% and 97.20% on ImageNet- and CIFAR-10-trained ResNet-50 models, respectively. We demonstrate that our scratches are effective under diverse shapes, such as straight lines or parabolic Bézier curves, with single or multiple colors. In an extreme setting in which our scratches are a single color, we obtain a targeted attack success rate of 66% on CIFAR-10 with an order of magnitude fewer queries than comparable attacks. We successfully launch our attack against Microsoft's Cognitive Services Image Captioning API and propose various mitigation strategies.
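The scratch family is straightforward to render: a quadratic Bézier curve B(t) = (1-t)²·p0 + 2(1-t)t·p1 + t²·p2 is rasterized onto the image with a fixed color and thickness. The sketch below draws one such scratch; in the attack, the control points and color would be what the black-box optimizer searches over, and the values here are fixed illustrative choices.

    import numpy as np

    # Render a single-color scratch along a quadratic Bezier curve; control
    # points and color are fixed illustrative values, not attack outputs.
    def bezier_scratch(image, p0, p1, p2, color, thickness=1):
        out = image.copy()
        h, w = image.shape[:2]
        for t in np.linspace(0.0, 1.0, 200):
            # Quadratic Bezier: B(t) = (1-t)^2 p0 + 2(1-t)t p1 + t^2 p2
            x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
            y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
            r, c = int(round(y)), int(round(x))
            out[max(0, r - thickness):min(h, r + thickness + 1),
                max(0, c - thickness):min(w, c + thickness + 1)] = color
        return out

    img = np.zeros((32, 32, 3), dtype=np.float32)   # CIFAR-10-sized canvas
    adv = bezier_scratch(img, (2, 28), (16, 2), (30, 26),
                         color=(1.0, 0.0, 0.0))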


