Smartphones and laptops can be unlocked by face or fingerprint recognition, yet neural networks that handle numerous requests every day have little ability to distinguish credible users from untrustworthy ones. This makes a model risky to trade as a commodity. Existing research either focuses on the intellectual property ownership of the commercialized model or traces the source of the leak after pirated models appear; actively verifying a user's legitimacy before producing output has not yet been considered. In this paper, we propose Model-Lock (M-LOCK), which realizes an end-to-end neural network with local dynamic access control: much like a smartphone's automatic screen lock, it actively prevents malicious attackers from obtaining usable performance while the owner is away. Three kinds of model training strategies are essential to achieving the tremendous performance divergence between certified and suspect inputs within a single neural network. Extensive experiments on the MNIST, FashionMNIST, CIFAR10, CIFAR100, SVHN and GTSRB datasets demonstrate the feasibility and effectiveness of the proposed scheme.
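To make the idea of certified-versus-suspect performance divergence concrete, the sketch below shows one plausible way such locking could be trained; it is an illustration only, not the paper's method. It assumes a trigger-based variant in which certified inputs are stamped with a secret token and trained toward their true labels, while plain (suspect) inputs are pushed toward a fixed "locked" class. The function names (`add_trigger`, `locked_training_step`), the token mechanism, and the single combined loss are assumptions; the paper describes three distinct training strategies that are not reproduced here.

```python
# Hypothetical sketch of trigger-based model locking (PyTorch).
# Assumption: access is granted by stamping a secret patch onto the input;
# inputs without the patch are trained to collapse onto one "locked" label.
import torch
import torch.nn as nn
import torch.nn.functional as F

def add_trigger(x, trigger):
    """Stamp the secret token (a small patch) onto the corner of each image."""
    x = x.clone()
    _, _, h, w = trigger.shape
    x[:, :, :h, :w] = trigger
    return x

def locked_training_step(model, optimizer, x, y, trigger, locked_label=0):
    """One update: certified (triggered) inputs keep their true labels,
    plain inputs are forced toward a constant 'locked' prediction."""
    optimizer.zero_grad()
    logits_cert = model(add_trigger(x, trigger))   # certified branch
    logits_plain = model(x)                        # suspect branch
    lock_target = torch.full_like(y, locked_label)
    loss = F.cross_entropy(logits_cert, y) + F.cross_entropy(logits_plain, lock_target)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Toy CIFAR10-sized classifier and random data, just to show the training loop runs.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    trigger = torch.rand(1, 3, 4, 4)               # secret 4x4 token
    x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
    print(locked_training_step(model, opt, x, y, trigger))
```

Under this kind of objective, a model served as a commodity would only deliver its advertised accuracy to users who hold the secret token, while anyone else querying it would receive near-useless, constant predictions.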