
Study of the robustness of neural networks based on spintronic neurons

Posted by Eleonora Raimondo
Publication date: 2021
Research field: Physics
Paper language: English





Spintronic technology is emerging as a direction for the hardware implementation of neurons and synapses in neuromorphic architectures. In particular, a single spintronic device can be used to implement the nonlinear activation function of a neuron. Here, we propose how to implement spintronic neurons with sigmoidal and ReLU-like activation functions. We then perform a numerical experiment showing the robustness of neural networks made of spintronic neurons that all have different activation functions, emulating device-to-device variations in a possible hardware implementation of the network. Specifically, we consider a vanilla neural network trained to recognize the categories of the Mixed National Institute of Standards and Technology (MNIST) database, and we show an average accuracy of 98.87% on the test dataset, very close to the 98.89% obtained for the ideal case (all neurons having the same sigmoid activation function). Similar results are obtained with neurons having a ReLU-like activation function.
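The following minimal NumPy sketch illustrates the kind of device-to-device variation the experiment emulates: every neuron in a dense layer applies its own sigmoid whose slope is randomly perturbed around a nominal value. The layer sizes, the 10% slope spread, and the class name are illustrative assumptions, not the parameters or the model used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

class VariedSigmoidLayer:
    """Dense layer whose neurons each use a slightly different sigmoid,
    emulating device-to-device variation of spintronic neurons
    (hypothetical parameters, for illustration only)."""
    def __init__(self, n_in, n_out, slope_spread=0.1):
        self.W = rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_in, n_out))
        self.b = np.zeros(n_out)
        # per-neuron slope drawn around the nominal value 1.0
        self.slope = rng.normal(1.0, slope_spread, size=n_out)

    def __call__(self, x):
        z = x @ self.W + self.b
        return 1.0 / (1.0 + np.exp(-self.slope * z))  # per-neuron sigmoid

# forward pass on random MNIST-sized inputs (784 = 28 x 28 pixels)
hidden = VariedSigmoidLayer(784, 100)
x = rng.normal(size=(8, 784))
h = hidden(x)
print(h.shape)  # (8, 100); each column is shaped by its own activation
```

A ReLU-like variant could be sketched the same way, for instance by perturbing a per-neuron gain applied to `np.maximum(z, 0)`.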



Read also

Obtaining state-of-the-art performance from deep learning models imposes a high cost on model creators, due to the tedious data preparation and the substantial processing requirements. To protect a model from unauthorized re-distribution, watermarking approaches have been introduced over the past couple of years. We investigate the robustness and reliability of state-of-the-art deep neural network watermarking schemes. We focus on backdoor-based watermarking and propose two attacks -- a black-box and a white-box attack -- that remove the watermark. Our black-box attack steals the model and removes the watermark with minimal requirements; it relies only on public unlabeled data and black-box access to the classification label. It does not need classification confidences or access to the model's sensitive information such as the training dataset, the trigger set, or the model parameters. The white-box attack provides efficient watermark removal when the parameters of the marked model are available; it does not require access to the labeled data or the trigger set, and improves the runtime of the black-box attack by up to seventeen times. We also prove the security inadequacy of backdoor-based watermarking in keeping the watermark undetectable by proposing an attack that detects whether a model contains a watermark. Our attacks show that a recipient of a marked model can remove a backdoor-based watermark with significantly less effort than training a new model, and that other techniques are needed to protect against re-distribution by a motivated attacker.
Vulnerability to adversarial attacks is one of the principal hurdles to the adoption of deep learning in safety-critical applications. Despite significant efforts, both practical and theoretical, the problem remains open. In this paper, we analyse the geometry of adversarial attacks in the large-data, overparametrized limit for Bayesian Neural Networks (BNNs). We show that, in the limit, vulnerability to gradient-based attacks arises as a result of degeneracy in the data distribution, i.e., when the data lies on a lower-dimensional submanifold of the ambient space. As a direct consequence, we demonstrate that in the limit BNN posteriors are robust to gradient-based adversarial attacks. Experimental results on the MNIST and Fashion MNIST datasets with BNNs trained with Hamiltonian Monte Carlo and Variational Inference support this line of argument, showing that BNNs can display both high accuracy and robustness to gradient-based adversarial attacks.
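As an aside, the following short PyTorch sketch shows what a one-step gradient-based attack (FGSM) looks like in practice; the linear model, epsilon value, and random data are placeholders, not the Bayesian networks or datasets used in that paper.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.1):
    """One-step gradient-based (FGSM) attack: move the input along the
    sign of the loss gradient to try to change the prediction."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# toy usage with an untrained linear "classifier" on MNIST-sized inputs
model = torch.nn.Linear(784, 10)
x = torch.rand(4, 784)            # stand-in for flattened MNIST images
y = torch.randint(0, 10, (4,))    # stand-in labels
x_adv = fgsm(model, x, y)
print((x_adv - x).abs().max())    # perturbation bounded by eps
```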
Spintronic diodes are emerging as disruptive candidates for impacting several technological applications, ranging from the Internet of Things to Artificial Intelligence. In this letter, an overview of the recent achievements on spintronic diodes is briefly presented, underlining the major breakthroughs that have led these devices to exhibit the largest sensitivity measured to date for a diode. For each class of spintronic diode (passive, active, resonant, non-resonant), we indicate the remaining developments needed to improve performance as well as the future directions. We also dedicate the last part of this perspective to new ideas for developing spintronic diodes in multiphysics systems by combining 2-dimensional materials and antiferromagnets.
Modern deep convolutional networks (CNNs) are often criticized for not generalizing under distributional shift. However, several recent breakthroughs in transfer learning suggest that these networks can cope with severe distribution shifts and successfully adapt to new tasks from a few training examples. In this work we study, for the first time, the interplay between out-of-distribution and transfer performance of modern image classification CNNs, and investigate the impact of the pre-training data size, the model scale, and the data preprocessing pipeline. We find that increasing both the training set and model sizes significantly improves distributional shift robustness. Furthermore, we show that, perhaps surprisingly, simple changes in the preprocessing, such as modifying the image resolution, can significantly mitigate robustness issues in some cases. Finally, we outline the shortcomings of existing robustness evaluation datasets and introduce a synthetic dataset, SI-Score, which we use for a systematic analysis across factors of variation common in visual data, such as object size and position.
The brain naturally binds events from different sources into unique concepts. It is hypothesized that this process occurs through the transient mutual synchronization of neurons located in different regions of the brain when a stimulus is presented. This mechanism of binding through synchronization can be directly implemented in neural networks composed of coupled oscillators. To do so, the oscillators must be able to mutually synchronize for the range of inputs corresponding to a single class, and otherwise remain desynchronized. Here we show that the outstanding ability of spintronic nano-oscillators to mutually synchronize, together with the possibility of precisely controlling the occurrence of mutual synchronization by tuning the oscillator frequencies over wide ranges, allows pattern recognition. We demonstrate experimentally on a simple task that three spintronic nano-oscillators can bind consecutive events and thus recognize and distinguish temporal sequences. This work is a step forward in the construction of neural networks that exploit the non-linear dynamic properties of their components to perform brain-inspired computations.
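To make the binding-through-synchronization mechanism concrete, here is a small Kuramoto-style sketch in NumPy: phase oscillators lock when their natural frequencies are close relative to the coupling strength, and otherwise remain desynchronized. The frequencies, coupling, and time step are arbitrary illustrative values, not the parameters of the spintronic nano-oscillators in the experiment.

```python
import numpy as np

def order_parameter(phases):
    # r close to 1 means the oscillators are synchronized; lower values mean they are not
    return np.abs(np.mean(np.exp(1j * phases)))

def run(natural_freqs, coupling=2.0, dt=1e-3, steps=5000, seed=0):
    """Integrate a Kuramoto model and return the time-averaged order parameter."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, size=len(natural_freqs))
    r_history = []
    for step in range(steps):
        # each oscillator is pulled toward the phases of the others
        dphase = natural_freqs + coupling * np.mean(
            np.sin(phase[None, :] - phase[:, None]), axis=1)
        phase = phase + dt * dphase
        if step > steps // 2:                  # average after transients
            r_history.append(order_parameter(phase))
    return np.mean(r_history)

print(run(np.array([100.0, 100.5, 99.8])))    # close frequencies: r near 1
print(run(np.array([100.0, 140.0, 60.0])))    # spread frequencies: r well below 1
```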