
My Health Sensor, my Classifier: Adapting a Trained Classifier to Unlabeled End-User Data

Publication date: 2020
Language: English





In this work, we present an approach for unsupervised domain adaptation (DA) under the constraint that the labeled source data are not directly available and only a classifier trained on the source data is provided. Our solution iteratively labels only high-confidence sub-regions of the target data distribution, based on the belief of the classifier, and then learns new classifiers from the expanding high-confidence dataset. The goal is to apply the proposed approach to DA for the task of sleep apnea detection and to achieve personalization based on the needs of the patient. In a series of experiments with both open and closed sleep monitoring datasets, the proposed approach is applied to data from different sensors, for DA between the different datasets. In all experiments the proposed approach outperforms the classifier trained in the source domain, with an improvement of the kappa coefficient ranging from 0.012 to 0.242. Additionally, our solution is applied to digit classification DA between three well-established digit datasets, to investigate the generalizability of the approach and to allow for comparison with related work. Even without direct access to the source data, it achieves good results and outperforms several well-established unsupervised DA methods.
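The iterative scheme described in the abstract amounts to confidence-based self-training. The snippet below is a minimal sketch under assumed settings: a scikit-learn style source classifier exposing predict_proba, a fixed confidence threshold, and logistic regression as the re-trained model. The paper's actual learner, threshold schedule, and stopping criterion are not specified here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def adapt_to_target(source_clf, X_target, conf_threshold=0.9, n_rounds=10):
    """Confidence-based self-training on unlabeled target data, using only a
    classifier trained on the source domain (illustrative settings only)."""
    current_clf = source_clf
    n_selected = 0

    for _ in range(n_rounds):
        # Score every target sample with the current classifier's belief.
        proba = current_clf.predict_proba(X_target)
        confidence = proba.max(axis=1)

        # Keep only the high-confidence sub-region of the target distribution.
        high_conf = np.where(confidence >= conf_threshold)[0]
        if len(high_conf) <= n_selected:
            break  # the pseudo-labeled set has stopped expanding
        n_selected = len(high_conf)
        pseudo_labels = proba[high_conf].argmax(axis=1)

        # Re-train a fresh classifier on the expanded pseudo-labeled subset
        # (assumes the selected subset covers more than one class).
        new_clf = LogisticRegression(max_iter=1000)
        new_clf.fit(X_target[high_conf], pseudo_labels)
        current_clf = new_clf

    return current_clf
```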




Read More

Many real-world tasks involve identifying patterns from data satisfying background or prior knowledge. In domains like materials discovery, due to the flaws and biases in raw experimental data, the identification of X-ray diffraction patterns (XRD) often requires a huge amount of manual work in finding refined phases that are similar to the ideal theoretical ones. Automatically refining the raw XRDs utilizing the simulated theoretical data is thus desirable. We propose imitation refinement, a novel approach to refine imperfect input patterns, guided by a pre-trained classifier incorporating prior knowledge from simulated theoretical data, such that the refined patterns imitate the ideal data. The classifier is trained on the ideal simulated data to classify patterns and learns an embedding space where each class is represented by a prototype. The refiner learns to refine the imperfect patterns with small modifications, such that their embeddings are closer to the corresponding prototypes. We show that the refiner can be trained in both supervised and unsupervised fashions. We further illustrate the effectiveness of the proposed approach both qualitatively and quantitatively in a digit refinement task and an X-ray diffraction pattern refinement task in materials discovery.
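As a rough illustration of the refinement objective sketched in this abstract, the snippet below pulls a refined pattern's embedding toward its class prototype while penalizing large modifications. All component names (`embedder`, `prototypes`, `refiner`) and the tiny architectures are placeholders for illustration, not the paper's models.

```python
import torch
import torch.nn as nn

embed_dim, n_classes, pattern_len = 64, 7, 512
# Placeholder embedding network standing in for the pre-trained classifier's backbone.
embedder = nn.Sequential(nn.Linear(pattern_len, embed_dim), nn.ReLU(),
                         nn.Linear(embed_dim, embed_dim))
prototypes = torch.randn(n_classes, embed_dim)   # stand-ins for learned class prototypes
# Placeholder refiner that predicts a small additive correction to the raw pattern.
refiner = nn.Sequential(nn.Linear(pattern_len, pattern_len), nn.Tanh())

optimizer = torch.optim.Adam(refiner.parameters(), lr=1e-3)

def refinement_loss(x, y, modification_weight=0.1):
    """Pull the refined pattern's embedding toward its class prototype while
    penalizing large modifications (supervised variant, illustrative only)."""
    delta = refiner(x)                 # small correction to the raw pattern
    refined = x + delta
    z = embedder(refined)
    proto_dist = ((z - prototypes[y]) ** 2).sum(dim=1).mean()
    return proto_dist + modification_weight * delta.pow(2).mean()

# One illustrative training step on a random batch.
x_batch = torch.randn(16, pattern_len)
y_batch = torch.randint(0, n_classes, (16,))
loss = refinement_loss(x_batch, y_batch)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```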
We propose a novel approach towards adversarial attacks on neural networks (NN), focusing on tampering with the data used for training instead of generating attacks on trained models. Our network-agnostic method creates a backdoor during training which can be exploited at test time to force a neural network to exhibit abnormal behaviour. We demonstrate on two widely used datasets (CIFAR-10 and SVHN) that a universal modification of just one pixel per image, applied to all the images of a class in the training set, is enough to corrupt the training procedure of several state-of-the-art deep neural networks, causing the networks to misclassify any images to which the modification is applied. Our aim is to bring to the attention of the machine learning community the possibility that even learning-based methods trained on public datasets can be subject to attacks by a skillful adversary.
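A minimal sketch of the described training-set poisoning, assuming CIFAR-10-shaped arrays; the trigger location, channel, and value are arbitrary illustrative choices, not the ones used in the paper.

```python
import numpy as np

def poison_training_set(images, labels, target_class, pixel=(0, 0), channel=0, value=1.0):
    """Apply the same single-pixel modification to every training image of one
    class, implanting a backdoor during training (illustrative sketch)."""
    poisoned = images.copy()
    mask = labels == target_class
    poisoned[mask, pixel[0], pixel[1], channel] = value
    return poisoned

def apply_trigger(image, pixel=(0, 0), channel=0, value=1.0):
    """At test time, stamping the same pixel onto any image activates the backdoor."""
    triggered = image.copy()
    triggered[pixel[0], pixel[1], channel] = value
    return triggered

# Example with CIFAR-10-shaped arrays (N, 32, 32, 3) scaled to [0, 1].
X_train = np.random.rand(100, 32, 32, 3).astype(np.float32)
y_train = np.random.randint(0, 10, size=100)
X_poisoned = poison_training_set(X_train, y_train, target_class=3)
```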
We consider the problem of learning a binary classifier only from positive data and unlabeled data (PU learning) and estimating the class prior in the unlabeled data under the case-control scenario. Most recent PU learning methods require an estimate of the class-prior probability in the unlabeled data, which is obtained in advance with a separate method. However, such a two-step approach, which first estimates the class prior and then trains a classifier, may not be optimal, since the estimation error of the class prior is not taken into account when the classifier is trained. In this paper, we propose a novel unified approach that estimates the class prior and trains the classifier alternately. Our proposed method is simple to implement and computationally efficient. Through experiments, we demonstrate the practical usefulness of the proposed method.
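A toy sketch of the alternation between prior estimation and classifier training, using logistic regression and a simple reweighting heuristic; the paper's actual objective and update rules are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def alternate_pu(X_pos, X_unl, n_iters=10, prior_init=0.5):
    """Alternately train a positive-vs-unlabeled classifier and re-estimate the
    class prior from its predictions (illustrative heuristic, not the paper's)."""
    prior = prior_init
    X = np.vstack([X_pos, X_unl])
    s = np.concatenate([np.ones(len(X_pos)), np.zeros(len(X_unl))])  # 1 = labeled positive

    clf = LogisticRegression(max_iter=1000)
    for _ in range(n_iters):
        # Step 1: train with unlabeled points down-weighted by the current prior estimate.
        weights = np.where(s == 1, 1.0, 1.0 - prior)
        clf.fit(X, s, sample_weight=weights)

        # Step 2: re-estimate the class prior as the average positive score on unlabeled data.
        prior = clf.predict_proba(X_unl)[:, 1].mean()

    return clf, prior

# Example with random data: 50 labeled positives, 200 unlabeled samples.
rng = np.random.default_rng(0)
clf, prior = alternate_pu(rng.normal(1, 1, (50, 5)), rng.normal(0, 1, (200, 5)))
```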
Liang Huang, You Zhang (2020)
Deep learning has recently been successfully applied to automatic modulation classification by extracting and classifying radio features in an end-to-end way. However, deep learning-based radio modulation classifiers lack interpretability, and there is little explanation or visibility into what kinds of radio features are extracted and chosen for classification. In this paper, we visualize different deep learning-based radio modulation classifiers by introducing a class activation vector. Specifically, a convolutional neural network (CNN) based classifier and a long short-term memory (LSTM) based classifier are studied separately, and their extracted radio features are visualized. Extensive numerical results show that both the CNN-based classifier and the LSTM-based classifier extract similar radio features relating to modulation reference points. In particular, for the LSTM-based classifier, the obtained radio features are similar to the knowledge of human experts. Our numerical results indicate that the radio features extracted by deep learning-based classifiers greatly depend on the contents carried by the radio signals, and that a short radio sample may lead to misclassification.
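The snippet below illustrates the general idea of attributing a modulation prediction to parts of the input signal, using plain gradient saliency on a stand-in LSTM classifier. This is a generic visualization technique, not the paper's class activation vector, and the model and signal shapes are assumptions for illustration.

```python
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    """Stand-in radio modulation classifier operating on I/Q samples of shape
    (batch, time, 2); not the architecture used in the paper."""
    def __init__(self, n_classes=11, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # classify from the final time step

model = LSTMClassifier()
signal = torch.randn(1, 128, 2, requires_grad=True)   # one 128-sample I/Q frame

logits = model(signal)
predicted_class = logits.argmax(dim=1).item()
logits[0, predicted_class].backward()

# Per-time-step importance: magnitude of the gradient of the predicted class
# score with respect to the input samples.
importance = signal.grad.abs().sum(dim=2).squeeze(0)
print(importance.shape)   # torch.Size([128])
```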
We introduce a hybrid model combining a quantum-inspired tensor network and a variational quantum circuit to perform supervised learning tasks. This architecture allows the classical and quantum parts of the model to be trained simultaneously, providing an end-to-end training framework. We show that, compared to principal component analysis, a tensor network based on the matrix product state with low bond dimensions performs better as a feature extractor for the input data of the variational quantum circuit in the binary and ternary classification of the MNIST and Fashion-MNIST datasets. The architecture is highly adaptable, and the classical-quantum boundary can be adjusted according to the availability of quantum resources by exploiting the correspondence between tensor networks and quantum circuits.
