
Learning from Higher-Layer Feature Visualizations

Published by Konstantinos Nikolaidis
Publication date: 2019
Research language: English





Driven by the goal of enabling sleep apnea monitoring and machine learning-based detection at home with small mobile devices, we investigate whether interpretation-based indirect knowledge transfer can be used to create classifiers with acceptable performance. Interpretation-based indirect knowledge transfer means that a classifier (student) learns from a synthetic dataset based on the knowledge representation of an already trained deep network (teacher). We use activation maximization to generate visualizations and create a synthetic dataset to train the student classifier. This approach has the advantage that student classifiers can be trained without access to the original training data. Through experiments, we investigate the feasibility of interpretation-based indirect knowledge transfer and its limitations. The student achieves an accuracy of 97.8% on MNIST (teacher accuracy: 99.3%) using a smaller architecture similar to that of the teacher. The student classifier achieves accuracies of 86.1% and 89.5% on a subset of the Apnea-ECG dataset (teacher: 89.5% and 91.1%, respectively).
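As a rough illustration of the approach described above, the sketch below uses activation maximization against a frozen teacher network to synthesize labelled inputs and then trains a student classifier on that synthetic set alone. The model handles (teacher, student), the input shape, and all hyperparameters are placeholders for this sketch, not the authors' settings:

# Minimal sketch of interpretation-based indirect knowledge transfer:
# (1) activation maximization against a frozen teacher to synthesize inputs,
# (2) training a student on the resulting synthetic dataset only.
# `teacher`, `student`, `input_shape`, and all hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def activation_maximization(teacher, target_class, input_shape,
                            steps=200, lr=0.1, weight_decay=1e-4):
    """Optimize an input so that the frozen teacher's logit for
    `target_class` is maximized (with a small L2 prior for regularity)."""
    teacher.eval()
    for p in teacher.parameters():
        p.requires_grad_(False)
    x = torch.randn(1, *input_shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = teacher(x)
        loss = -logits[0, target_class] + weight_decay * x.pow(2).sum()
        loss.backward()
        opt.step()
    return x.detach()

def build_synthetic_dataset(teacher, num_classes, per_class, input_shape):
    """Create (visualization, label) pairs, one activation-maximization
    run per synthetic example."""
    xs, ys = [], []
    for c in range(num_classes):
        for _ in range(per_class):
            xs.append(activation_maximization(teacher, c, input_shape))
            ys.append(c)
    return torch.cat(xs), torch.tensor(ys)

def train_student(student, synth_x, synth_y, epochs=10, lr=1e-3):
    """Train the student on the synthetic set; the original data is never used."""
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(epochs):
        perm = torch.randperm(len(synth_y))
        for i in range(0, len(perm), 64):
            idx = perm[i:i + 64]
            opt.zero_grad()
            loss = F.cross_entropy(student(synth_x[idx]), synth_y[idx])
            loss.backward()
            opt.step()
    return student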


Read also

Space-time visualizations of macroscopic or microscopic traffic variables are a qualitative tool used by traffic engineers to understand and analyze different aspects of road traffic dynamics. We present a deep learning method to learn the macroscopic traffic speed dynamics from these space-time visualizations, and demonstrate its application in the framework of traffic state estimation. Compared to existing estimation approaches, our approach allows a finer estimation resolution, eliminates the dependence on initial conditions, and is agnostic to external factors such as traffic demand, road inhomogeneities and driving behaviors. Our model respects causality in traffic dynamics, which improves the robustness of the estimation. We present high-resolution traffic speed fields estimated for several freeway sections using data obtained from the Next Generation Simulation Program (NGSIM) and German Highway (HighD) datasets. We further demonstrate the quality and utility of the estimation by inferring vehicle trajectories from the estimated speed fields, and discuss the benefits of deep neural network models in approximating the traffic dynamics.
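Purely as an illustration of estimating a dense speed field from a space-time visualization (and not the paper's architecture), the following minimal PyTorch sketch maps a sparsely observed space-time speed map plus an observation mask to a dense estimate; the layer sizes, grid dimensions, and two-channel input convention are assumptions:

# Illustrative only: a small convolutional model from a sparsely observed
# space-time speed field (plus a mask of observed cells) to a dense estimate.
import torch
import torch.nn as nn

class SpeedFieldEstimator(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, hidden, kernel_size=3, padding=1),   # channels: [sparse speeds, mask]
            nn.ReLU(),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, 1, kernel_size=3, padding=1),   # dense speed estimate
        )

    def forward(self, sparse_speed, mask):
        # sparse_speed, mask: (batch, 1, time, space)
        return self.net(torch.cat([sparse_speed, mask], dim=1))

# Usage sketch: train against full speed maps (e.g., derived from NGSIM/HighD
# trajectories), masking the loss to observed or simulated ground-truth cells.
model = SpeedFieldEstimator()
x = torch.zeros(1, 1, 120, 80)   # 120 time steps x 80 road cells (made-up sizes)
m = torch.zeros(1, 1, 120, 80)
dense = model(x, m)              # (1, 1, 120, 80)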
Nonlinear kernels can be approximated using finite-dimensional feature maps for efficient risk minimization. Due to the inherent trade-off between the dimension of the (mapped) feature space and the approximation accuracy, the key problem is to identify promising (explicit) features leading to a satisfactory out-of-sample performance. In this work, we tackle this problem by efficiently choosing such features from multiple kernels in a greedy fashion. Our method sequentially selects these explicit features from a set of candidate features using a correlation metric. We establish an out-of-sample error bound capturing the trade-off between the error in terms of explicit features (approximation error) and the error due to spectral properties of the best model in the Hilbert space associated with the combined kernel (spectral error). The result verifies that when the (best) underlying data model is sparse enough, i.e., the spectral error is negligible, one can control the test error with a small number of explicit features that can scale poly-logarithmically with the data. Our empirical results show that, given a fixed number of explicit features, the method can achieve a lower test error with a smaller time cost compared to the state-of-the-art in data-dependent random features.
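The following NumPy sketch conveys the general flavour of greedily selecting explicit random features drawn from several kernels by their correlation with the current residual; the candidate construction, the correlation criterion, and the ridge refit are assumptions of this sketch rather than the paper's exact algorithm:

# Greedy selection of explicit (random Fourier) features by correlation with
# the residual, selecting from candidates generated by multiple kernels.
import numpy as np

def random_fourier_candidates(X, num_candidates, bandwidths, rng):
    """Candidates drawn from several Gaussian kernels (one per bandwidth)."""
    n, d = X.shape
    feats = []
    for s in bandwidths:
        W = rng.normal(scale=1.0 / s, size=(d, num_candidates))
        b = rng.uniform(0, 2 * np.pi, size=num_candidates)
        feats.append(np.cos(X @ W + b))
    return np.hstack(feats)               # (n, num_candidates * len(bandwidths))

def greedy_feature_selection(X, y, num_features, bandwidths=(0.5, 1.0, 2.0),
                             num_candidates=200, ridge=1e-3, seed=0):
    """X: (n, d) inputs, y: (n,) targets; returns selected columns and weights."""
    rng = np.random.default_rng(seed)
    Phi = random_fourier_candidates(X, num_candidates, bandwidths, rng)
    chosen, residual = [], y.astype(float).copy()
    for _ in range(num_features):
        # Pick the unused candidate most correlated with the current residual.
        scores = np.abs(Phi.T @ residual)
        scores[chosen] = -np.inf
        j = int(np.argmax(scores))
        chosen.append(j)
        # Refit a ridge model on the selected features and update the residual.
        S = Phi[:, chosen]
        w = np.linalg.solve(S.T @ S + ridge * np.eye(len(chosen)), S.T @ y)
        residual = y - S @ w
    return chosen, w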
Tao Bai, Jinnan Chen, Jun Zhao (2020)
Deep learning models are shown to be vulnerable to adversarial examples. Though adversarial training can enhance model robustness, typical approaches are computationally expensive. Recent works proposed to transfer robustness to adversarial attacks across different tasks or models with soft labels. Compared to soft labels, features contain rich semantic information and hold the potential to be applied to different downstream tasks. In this paper, we propose a novel approach called Guided Adversarial Contrastive Distillation (GACD) to effectively transfer adversarial robustness from teacher to student with features. We first formulate this objective as contrastive learning and connect it with mutual information. With a well-trained teacher model as an anchor, students are expected to extract features similar to the teacher's. Then, considering the potential errors made by teachers, we propose sample reweighted estimation to eliminate the negative effects from teachers. With GACD, the student not only learns to extract robust features, but also captures structural knowledge from the teacher. Through extensive experiments on popular datasets such as CIFAR-10, CIFAR-100 and STL-10, we demonstrate that our approach can effectively transfer robustness across different models and even different tasks, and achieve comparable or better results than existing methods. Besides, we provide a detailed analysis of various methods, showing that students produced by our approach capture more structural knowledge from teachers and learn more robust features under adversarial attacks.
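A minimal sketch of feature-level contrastive distillation in this spirit is shown below: the student's feature for each input is pulled toward the frozen teacher's feature for the same input and pushed away from the teacher features of the other samples in the batch. It does not reproduce GACD's exact objective or its sample-reweighted estimation:

# Contrastive feature-distillation loss with the teacher feature as anchor.
import torch
import torch.nn.functional as F

def contrastive_distillation_loss(student_feats, teacher_feats, temperature=0.1):
    """student_feats, teacher_feats: (batch, dim) feature matrices."""
    s = F.normalize(student_feats, dim=1)
    t = F.normalize(teacher_feats.detach(), dim=1)   # teacher is frozen
    logits = s @ t.t() / temperature                 # (batch, batch) similarities
    targets = torch.arange(s.size(0), device=s.device)
    # The positive for each student feature is the teacher feature of the
    # same sample; all other teacher features in the batch act as negatives.
    return F.cross_entropy(logits, targets)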
Data augmentation is usually used by supervised learning approaches for offline writer identification, but such approaches require extra training data and potentially lead to overfitting errors. In this study, a semi-supervised feature learning pipeline was proposed to improve the performance of writer identification by training with extra unlabeled data and the original labeled data simultaneously. Specifically, we proposed a weighted label smoothing regularization (WLSR) method for data augmentation, which assigned a weighted uniform label distribution to the extra unlabeled data. The WLSR method could regularize the convolutional neural network (CNN) baseline, allowing more discriminative features to be learned to represent the properties of different writing styles. The experimental results on well-known benchmark datasets (ICDAR2013 and CVL) showed that our proposed semi-supervised feature learning approach could significantly improve the baseline measurement and perform competitively with existing writer identification approaches. Our findings provide new insights into offline writer identification.
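A hedged sketch of a WLSR-style training loss is given below: labeled samples receive the usual cross-entropy, while the extra unlabeled samples are regularized toward a weighted uniform label distribution. The specific weighting here is an assumption for illustration, not necessarily the paper's exact formulation:

# Combined loss: supervised cross-entropy on labeled data plus a weighted
# uniform-label regularizer on the extra unlabeled data (WLSR-style).
import torch
import torch.nn.functional as F

def wlsr_loss(logits_labeled, labels, logits_unlabeled, num_classes, weight=0.1):
    supervised = F.cross_entropy(logits_labeled, labels)
    # Uniform target distribution for unlabeled samples, scaled by `weight`.
    log_probs = F.log_softmax(logits_unlabeled, dim=1)
    uniform = torch.full_like(log_probs, 1.0 / num_classes)
    regularizer = -(uniform * log_probs).sum(dim=1).mean()
    return supervised + weight * regularizer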
In this paper, we present a novel unsupervised feature learning architecture, which consists of a multi-clustering integration module and a variant of the RBM termed the multi-clustering integration RBM (MIRBM). In the multi-clustering integration module, we apply three unsupervised clustering algorithms (K-means, affinity propagation and spectral clustering) to obtain three different clustering partitions (CPs) without any background knowledge or labels. Then, a unanimous voting strategy is used to generate a local clustering partition (LCP). The novel MIRBM model is the core feature encoding part of the proposed unsupervised feature learning architecture. Its novelty is that the LCP, as an unsupervised guidance signal, is integrated into one-step contrastive divergence (CD1) learning to guide the distribution of the hidden layer features. For instances in the same LCP cluster, the hidden and reconstructed hidden layer features of the MIRBM model tend to draw closer together during training. Meanwhile, the LCP centers tend to disperse from each other as much as possible in the hidden and reconstructed hidden layers during training. The experiments demonstrate that the proposed unsupervised feature learning architecture has more powerful feature representation and generalization capability than the state-of-the-art graph regularized RBM (GraphRBM) for clustering tasks on the Microsoft Research Asia Multimedia (MSRA-MM) 2.0 dataset.
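As a hedged sketch of the multi-clustering integration step only, the code below obtains the three partitions without labels and keeps the agreement structure shared by all of them (one plausible reading of unanimous voting); how the resulting LCP then guides CD1 training of the RBM is not shown, and the cluster counts are assumptions:

# Build a local clustering partition (LCP) from three unsupervised partitions.
import numpy as np
from sklearn.cluster import KMeans, AffinityPropagation, SpectralClustering

def local_clustering_partition(X, n_clusters=10, random_state=0):
    parts = [
        KMeans(n_clusters=n_clusters, random_state=random_state).fit_predict(X),
        AffinityPropagation(random_state=random_state).fit_predict(X),
        SpectralClustering(n_clusters=n_clusters, random_state=random_state).fit_predict(X),
    ]
    # Two samples share an LCP cluster only if every partition puts them
    # together: group samples by their triple of cluster ids.
    keys = list(zip(*parts))
    groups = {}
    for i, key in enumerate(keys):
        groups.setdefault(key, []).append(i)
    lcp = np.empty(len(X), dtype=int)
    for cluster_id, members in enumerate(groups.values()):
        lcp[members] = cluster_id
    return lcp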
