
Automatically Lock Your Neural Networks When You're Away

Posted by: Ge Ren
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Smartphones and laptops can be unlocked by face or fingerprint recognition, yet neural networks that confront numerous requests every day have little capability to distinguish untrustworthy users from credible ones. This makes a model risky to trade as a commodity. Existing research either focuses on the intellectual-property ownership of the commercialized model, or traces the source of the leak after pirated models appear. Actively verifying a user's legitimacy before producing a prediction, however, has not yet been considered. In this paper, we propose Model-Lock (M-LOCK) to realize an end-to-end neural network with local dynamic access control: much like the automatic locking function of a smartphone, it prevents malicious attackers from obtaining useful performance when you are away. Three model-training strategies are essential to achieving the large performance divergence between certified and suspect inputs within a single neural network. Extensive experiments on the MNIST, FashionMNIST, CIFAR10, CIFAR100, SVHN and GTSRB datasets demonstrate the feasibility and effectiveness of the proposed scheme.
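The abstract does not detail the three training strategies, but the core idea of one network that behaves very differently for certified and suspect inputs can be illustrated with a minimal sketch. The corner token, the random-label objective for suspect inputs, and the toy CNN below are illustrative assumptions, not the paper's actual M-LOCK method:

```python
# Minimal sketch of the "model locking" idea, NOT the paper's exact M-LOCK
# strategies. Assumption: "certified" inputs carry a secret 4x4 token stamped
# into a fixed image corner; plain inputs are treated as suspect and trained
# toward random labels, so useful accuracy is only available to token holders.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 10


class SmallCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(32 * 7 * 7, NUM_CLASSES)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))


def stamp_token(x, token):
    """Overwrite a 4x4 corner patch with the secret token (the 'key')."""
    x = x.clone()
    x[:, :, :4, :4] = token
    return x


def locked_training_step(model, optimizer, x, y, token):
    """Certified (tokened) inputs keep true labels; suspect (plain) inputs
    are pushed toward random labels to destroy usable accuracy."""
    certified = stamp_token(x, token)
    suspect_targets = torch.randint(0, NUM_CLASSES, y.shape, device=y.device)

    loss = (F.cross_entropy(model(certified), y)
            + F.cross_entropy(model(x), suspect_targets))

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    model = SmallCNN()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    token = torch.rand(1, 4, 4)                  # the shared secret
    x = torch.rand(8, 1, 28, 28)                 # stand-in for an MNIST batch
    y = torch.randint(0, NUM_CLASSES, (8,))
    print(locked_training_step(model, optimizer, x, y, token))
```

At inference time, only a caller who stamps the same token onto their inputs would see the certified accuracy; unstamped queries would receive near-random predictions.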




Read also

Eye movement patterns reflect latent human cognitive activities. We aim to discover eye movement patterns during face recognition under different cognitive conditions of information concealment. These conditions cover the degree of face familiarity and whether the observer is being deceptive: telling the truth when observing familiar and unfamiliar faces, and deceiving in front of familiar faces. We apply Hidden Markov models with Gaussian emissions to characterize the regions and trajectories of eye-fixation points under the above three conditions. Our results show that both eye movement patterns and eye gaze regions become significantly different during deception compared with truth-telling. We show the feasibility of detecting deception, and further of classifying cognitive activity, using eye movement patterns.
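As a rough illustration of the modeling step mentioned above, a Gaussian-emission HMM can be fit to fixation coordinates so that hidden states correspond to gaze regions. The number of states and the synthetic data below are assumptions, not the study's setup:

```python
# Hedged sketch: fit a Gaussian-emission HMM to eye-fixation (x, y) points.
import numpy as np
from hmmlearn import hmm

# Fixation points as (x, y) screen coordinates for two viewing trials.
rng = np.random.default_rng(0)
trial1 = rng.normal(loc=[300, 200], scale=20, size=(50, 2))
trial2 = rng.normal(loc=[500, 350], scale=20, size=(40, 2))
X = np.vstack([trial1, trial2])
lengths = [len(trial1), len(trial2)]      # per-trial sequence lengths

# Three hidden states ~ three gaze regions (e.g., eyes, nose, mouth).
model = hmm.GaussianHMM(n_components=3, covariance_type="full", n_iter=100)
model.fit(X, lengths)

states = model.predict(X)                 # region label per fixation
print("learned region centers:\n", model.means_)
print("transition matrix:\n", model.transmat_)
```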
140 - Mao Yang, Bo Li, Guanxiong Feng 2018
In recent years, deep learning has driven a deep technical revolution in almost every field and has attracted great attention from industry and academia. In particular, the convolutional neural network (CNN), one representative model of deep learning, has achieved great successes in computer vision and natural language processing. However, simply or blindly applying a CNN to other fields results in lower training effectiveness or makes it quite difficult to adjust the model parameters. In this poster, we propose a general methodology named V-CNN that introduces data visualization for CNNs. V-CNN adds a data visualization model prior to CNN modeling to make sure the processed data fits the characteristics of images as well as CNN modeling. We apply V-CNN to the network intrusion detection problem based on a well-known practical dataset, AWID. Simulation results confirm that V-CNN significantly outperforms other studies, and the recall rate of each intrusion category is more than 99.8%.
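The abstract does not describe the visualization model itself; one common way to make non-image data "fit for the features of images" is to map each feature vector onto a 2D grid before the CNN. The grid size, scaling, and toy network below are purely illustrative assumptions, not V-CNN:

```python
# Hedged sketch: reshape tabular intrusion-detection features into an
# image-like grid before a small CNN. Not the paper's V-CNN model.
import numpy as np
import torch
import torch.nn as nn

def features_to_image(x, side=12):
    """Min-max scale a 1D feature vector and pad/reshape it into a side x side grid."""
    x = (x - x.min()) / (x.max() - x.min() + 1e-8)
    padded = np.zeros(side * side, dtype=np.float32)
    padded[: min(len(x), side * side)] = x[: side * side]
    return padded.reshape(1, side, side)          # (channels, H, W)

cnn = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(8 * 6 * 6, 4),                      # e.g. 4 traffic classes
)

record = np.random.rand(100).astype(np.float32)   # stand-in for one AWID record
img = torch.from_numpy(features_to_image(record)).unsqueeze(0)
print(cnn(img).shape)                             # torch.Size([1, 4])
```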
76 - Mike Lisa 2007
A huge systematics of femtoscopic measurements has been used over the past 20 years to characterize the system created in heavy ion collisions. These measurements cover two orders of magnitude in energy, and with LHC beams imminent, this range will be extended by more than another order of magnitude. Here, I discuss theoretical expectations for femtoscopy of $A+A$ and $p+p$ collisions at the LHC, based on Boltzmann and hydrodynamic calculations, as well as on naive extrapolation of existing systematics.
Deep neural networks (DNNs) have achieved tremendous success in many machine learning tasks, such as image classification. Unfortunately, researchers have shown that DNNs are easily attacked by adversarial examples: slightly perturbed images which can mislead DNNs into giving incorrect classification results. Such attacks have seriously hampered the deployment of DNN systems in areas with strict security or safety requirements, such as autonomous cars, face recognition, and malware detection. Defensive distillation is a mechanism aimed at training a robust DNN that significantly reduces the effectiveness of adversarial example generation. However, the state-of-the-art attack can succeed on distilled networks with 100% probability, but it is a white-box attack that needs to know the inner information of the DNN, whereas the black-box scenario is more general. In this paper, we first propose the epsilon-neighborhood attack, which can fool defensively distilled networks with a 100% success rate in the white-box setting and quickly generates adversarial examples with good visual quality. On the basis of this attack, we further propose the region-based attack against defensively distilled DNNs in the black-box setting. We also perform a bypass attack to indirectly break the distillation defense as a complementary method. The experimental results show that our black-box attacks have a considerable success rate on defensively distilled networks.
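For readers unfamiliar with the attack setting, the sketch below shows a standard way of crafting an adversarial example constrained to an L-infinity epsilon-ball around the input (an FGSM-style step). It is a generic illustration, not the paper's epsilon-neighborhood, region-based, or bypass attacks:

```python
# Generic FGSM-style adversarial perturbation within an eps-ball (illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return x perturbed by the sign of the loss gradient, clipped to [0, 1]."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()   # step inside the eps-ball
    return x_adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
    x = torch.rand(4, 1, 28, 28)
    y = torch.randint(0, 10, (4,))
    x_adv = fgsm_attack(model, x, y)
    print((x_adv - x).abs().max())   # perturbation magnitude <= epsilon
```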
Recent works have shown that interval bound propagation (IBP) can be used to train verifiably robust neural networks. Researchers observe an intriguing phenomenon on these IBP-trained networks: CROWN, a bounding method based on tight linear relaxation, often gives very loose bounds on these networks. We also observe that most neurons become dead during the IBP training process, which could hurt the representation capability of the network. In this paper, we study the relationship between IBP and CROWN, and prove that CROWN is always tighter than IBP when choosing appropriate bounding lines. We further propose a relaxed version of CROWN, linear bound propagation (LBP), that can be used to verify large networks and obtain lower verified errors than IBP. We also design a new activation function, the parameterized ramp function (ParamRamp), which offers more diversity of neuron status than ReLU. We conduct extensive experiments on MNIST, CIFAR-10 and Tiny-ImageNet with the ParamRamp activation and achieve state-of-the-art verified robustness. Code and the appendix are available at https://github.com/ZhaoyangLyu/VerifiablyRobustNN.
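To make the bounding machinery concrete, the sketch below propagates interval bounds through one linear layer followed by ReLU, which is the textbook IBP step the paper builds on. It is not the authors' CROWN/LBP implementation or the ParamRamp contribution:

```python
# Textbook interval bound propagation (IBP) through a linear layer + ReLU.
import torch

def ibp_linear(W, b, lower, upper):
    """Propagate elementwise input bounds through y = W x + b."""
    center = (upper + lower) / 2
    radius = (upper - lower) / 2
    out_center = center @ W.t() + b
    out_radius = radius @ W.t().abs()            # |W| absorbs the interval width
    return out_center - out_radius, out_center + out_radius

if __name__ == "__main__":
    W = torch.randn(5, 3)
    b = torch.randn(5)
    x = torch.rand(1, 3)
    eps = 0.1
    lo, hi = ibp_linear(W, b, x - eps, x + eps)
    lo, hi = lo.clamp(min=0), hi.clamp(min=0)    # ReLU is monotone, so bounds pass through
    print(lo, hi)
```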

