
Boost AI Power: Data Augmentation Strategies with unlabelled Data and Conformal Prediction, a Case in Alternative Herbal Medicine Discrimination with Electronic Nose

Posted by: Li Liu
Publication date: 2021
Research field: Information Engineering
Paper language: English





Electronic nose has been proven to be effective in alternative herbal medicine classification, but due to the nature of supervised learning, previous research relies heavily on labelled training data, which are time-costly and labor-intensive to collect. To alleviate this critical dependency on training data in real-world applications, this study aims to improve classification accuracy via data augmentation strategies. The effectiveness of five data augmentation strategies under different degrees of training data inadequacy is investigated in two scenarios: a noise-free scenario, where different availabilities of unlabelled data were considered, and a noisy scenario, where different levels of Gaussian noise and translational shifts were added to represent sensor drift. The five augmentation strategies, namely noise-adding data augmentation, semi-supervised learning, classifier-based online learning, Inductive Conformal Prediction (ICP) online learning, and the novel ensemble ICP online learning proposed in this study, are evaluated and compared against a supervised learning baseline, with Linear Discriminant Analysis (LDA) and Support Vector Machine (SVM) as the classifiers. Our novel strategy, ensemble ICP online learning, outperforms the others, showing non-decreasing classification accuracy on all tasks and a significant improvement on most simulated tasks (25 out of 36 tasks, p <= 0.05). Furthermore, this study provides a systematic analysis of the different augmentation strategies. It shows that at least one strategy significantly improved the classification accuracy with LDA (p <= 0.05) and yielded non-decreasing classification accuracy with SVM in each task. In particular, our proposed strategy demonstrated both effectiveness and robustness in boosting the generalizability of the classification model, and it can be employed in other machine learning applications.
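For readers unfamiliar with how conformal prediction can drive this kind of augmentation, the sketch below illustrates the general idea of ICP-based online learning with an ensemble of classifiers: each unlabelled sample receives conformal p-values from every ensemble member, and only samples predicted with high credibility and confidence are added to the training set with their predicted label. This is a minimal sketch, not the paper's implementation; the probability-based nonconformity score, the p-value averaging across an LDA and an SVM member, and the threshold values are assumptions introduced here.

```python
# A minimal sketch of ensemble ICP online learning, assuming a probability-based
# nonconformity score (1 - P(label | x)) and p-value averaging across an LDA and
# an SVM member; the paper's exact score, ensembling rule, and thresholds are
# not reproduced here.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

def icp_p_values(clf, X_cal, y_cal, x_new):
    """Conformal p-value of one new sample for every candidate label."""
    cal_proba = clf.predict_proba(X_cal)
    cal_cols = np.searchsorted(clf.classes_, y_cal)
    # nonconformity of each calibration sample w.r.t. its true label
    cal_alpha = 1.0 - cal_proba[np.arange(len(y_cal)), cal_cols]
    new_proba = clf.predict_proba(x_new.reshape(1, -1))[0]
    p_vals = np.empty(len(clf.classes_))
    for k in range(len(clf.classes_)):
        alpha_k = 1.0 - new_proba[k]          # nonconformity if the label were k
        p_vals[k] = (np.sum(cal_alpha >= alpha_k) + 1) / (len(cal_alpha) + 1)
    return p_vals

def ensemble_icp_augment(X_train, y_train, X_cal, y_cal, X_unlab,
                         cred_thr=0.9, conf_thr=0.95):
    """Add unlabelled samples that receive a confident single-label prediction
    from the ensemble of conformal predictors (thresholds are hypothetical)."""
    members = [LinearDiscriminantAnalysis(), SVC(probability=True)]
    for clf in members:
        clf.fit(X_train, y_train)
    X_aug, y_aug = list(X_train), list(y_train)
    for x in X_unlab:
        p = np.mean([icp_p_values(clf, X_cal, y_cal, x) for clf in members], axis=0)
        order = np.argsort(p)[::-1]
        credibility, confidence = p[order[0]], 1.0 - p[order[1]]
        if credibility >= cred_thr and confidence >= conf_thr:
            X_aug.append(x)                   # accept the pseudo-labelled sample
            y_aug.append(members[0].classes_[order[0]])
    return np.asarray(X_aug), np.asarray(y_aug)
```

The credibility/confidence gate controls how aggressively unlabelled samples are admitted: a stricter gate adds fewer but cleaner pseudo-labels.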


Read also

In machine learning applications, the reliability of predictions is significant for assisted decision-making and risk control. As an effective framework to quantify prediction reliability, conformal prediction (CP) was developed with CPKNN (CP with kNN). However, the conventional CPKNN suffers from high variance and bias and long computational time as the feature dimensionality increases. To address these limitations, a new CP framework, conformal prediction with shrunken centroids (CPSC), is proposed. It regularizes the class centroids to attenuate irrelevant features and shrink the sample space for predictions and reliability quantification. To compare CPKNN and CPSC, we employed them in the classification of 12 categories of alternative herbal medicine with an electronic nose as a case study and assessed them on two tasks: 1) offline prediction: the training set was fixed and the accuracy on the testing set was evaluated; 2) online prediction with data augmentation: the methods filtered unlabeled data to augment the training data based on prediction reliability, and the final accuracy on the testing set was compared. The results show that CPSC significantly outperformed CPKNN in both tasks: 1) CPSC reached a significantly higher accuracy with lower computation cost, and with the same credibility output, CPSC generally achieved a higher accuracy; 2) the data augmentation process with CPSC robustly showed a statistically significant improvement in prediction accuracy across different reliability thresholds, and the augmented data were more balanced across classes. This novel CPSC provides higher prediction accuracy and better reliability quantification, and can serve as reliable assistance in decision support.
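The core of a shrunken-centroid conformal predictor is a nonconformity score built from regularized class centroids. The sketch below shows one plausible formulation, using scikit-learn's NearestCentroid with a shrink_threshold and taking the ratio of the distance to the candidate class centroid over the distance to the nearest other centroid as the score; the paper's exact CPSC score and shrinkage setting may differ.

```python
# A sketch of a shrunken-centroid nonconformity score, assuming the score is the
# distance to the candidate class centroid divided by the distance to the nearest
# other centroid; the paper's exact CPSC formulation may differ.
import numpy as np
from sklearn.neighbors import NearestCentroid

def cpsc_p_values(X_train, y_train, X_cal, y_cal, x_new, shrink=0.5):
    """Conformal p-value of one new sample for every class under CPSC-style scoring."""
    nc = NearestCentroid(shrink_threshold=shrink)  # centroids shrunk toward the overall mean
    nc.fit(X_train, y_train)
    C = nc.centroids_

    def alpha(x, k):
        d = np.linalg.norm(x - C, axis=1)          # distance to every class centroid
        return d[k] / np.min(np.delete(d, k))      # large when x sits far from class k

    cal_cols = np.searchsorted(nc.classes_, y_cal)
    cal_alpha = np.array([alpha(x, k) for x, k in zip(X_cal, cal_cols)])
    return np.array([
        (np.sum(cal_alpha >= alpha(x_new, k)) + 1) / (len(cal_alpha) + 1)
        for k in range(len(nc.classes_))
    ])
```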
The origins of herbal medicines are important for their treatment effect and could potentially be distinguished by an electronic nose system. As the differences in the odor fingerprints of herbal medicines from different origins can be tiny, the discrimination of origins can be much harder than that of different categories. Better feature extraction methods are therefore important for performing this task more accurately, but systematic studies on different feature extraction methods are lacking. In this study, we classified different origins of three categories of herbal medicines with different feature extraction methods: manual feature extraction, mathematical transformation, and deep learning algorithms. With 50 repeated bootstrapping experiments, we compared the effectiveness of the extracted features using a two-layer neural network with and without dimensionality reduction (principal component analysis, linear discriminant analysis) as the three base classifiers. Compared with the conventional aggregated features, the Fast Fourier Transform method and our novel approach (longitudinal-information-in-a-line) showed a significant accuracy improvement (p < 0.05) on all three base classifiers and all three herbal medicine categories. Two of the deep learning algorithms we applied also showed partially significant improvements: a one-dimensional convolutional neural network (1D-CNN) and a novel graph-pooling-based framework, multivariate time pooling (MTPool).
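As an illustration of the mathematical-transformation family of features mentioned above, the sketch below computes low-frequency FFT magnitudes for each sensor channel of an e-nose recording and concatenates them into one feature vector; the array shapes and the number of retained coefficients are hypothetical choices, not values taken from the paper.

```python
# A sketch of FFT-based feature extraction for one e-nose recording, assuming a
# (sensors x time) array; keeping 16 coefficients per sensor is an assumed choice.
import numpy as np

def fft_features(sample, n_coeff=16):
    """sample: array of shape (n_sensors, n_timesteps) -> flat feature vector."""
    spectrum = np.fft.rfft(sample, axis=1)   # per-sensor frequency spectrum
    mags = np.abs(spectrum)[:, :n_coeff]     # keep the low-frequency magnitudes
    return mags.ravel()

# e.g. 8 sensors x 200 time steps -> 8 * 16 = 128 features
features = fft_features(np.random.rand(8, 200))
```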
Data augmentation by mixing samples, such as Mixup, has been widely used, typically for classification tasks. However, this strategy is not always effective due to the gap between the augmented samples used for training and the original samples used for testing. This gap may prevent a classifier from learning the optimal decision boundary and increase the generalization error. To overcome this problem, we propose an alternative framework called Data Interpolating Prediction (DIP). Unlike common data augmentations, we encapsulate the sample-mixing process in the hypothesis class of the classifier so that training and test samples are treated equally. We derive the generalization bound and show that DIP helps to reduce the original Rademacher complexity. We also empirically demonstrate that DIP can outperform existing Mixup.
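For context, the standard sample-mixing baseline that DIP is contrasted with looks roughly like the sketch below; DIP itself moves the mixing inside the hypothesis class so it applies at both training and test time, which is not reproduced here.

```python
# A minimal sketch of the standard Mixup baseline referenced above; the mixing
# coefficient alpha and the one-hot label format are conventional assumptions.
import numpy as np

def mixup_batch(X, y_onehot, alpha=0.2, seed=0):
    """Mix each training sample with a randomly chosen partner."""
    rng = np.random.default_rng(seed)
    lam = rng.beta(alpha, alpha, size=(len(X), 1))  # mixing weights ~ Beta(alpha, alpha)
    perm = rng.permutation(len(X))
    X_mix = lam * X + (1.0 - lam) * X[perm]
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return X_mix, y_mix
```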
Medical imaging is a domain which suffers from a paucity of manually annotated data for the training of learning algorithms. Manually delineating pathological regions at a pixel level is a time-consuming process, especially in 3D images, and often requires the time of a trained expert. As a result, supervised machine learning solutions must make do with small amounts of labelled data, despite there often being additional unlabelled data available. Whilst of less value than labelled images, these unlabelled images can contain potentially useful information. In this paper we propose combining both labelled and unlabelled data within a GAN framework, before using the resulting network to produce images for use when training a segmentation network. We explore the task of deep grey matter multi-class segmentation in an AD dataset and show that the proposed method leads to a significant improvement in segmentation results, particularly in cases where the amount of labelled data is restricted. We show that this improvement is largely driven by a greater ability to segment the structures known to be the most affected by AD, thereby demonstrating the benefits of exposing the system to more examples of pathological anatomical variation. We also show how a shift in the domain of the training data from young and healthy towards older and more pathological examples leads to better segmentations of the latter cases, and that this leads to a significant improvement in the ability of the computed segmentations to stratify cases of AD.
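One common way to combine labelled and unlabelled images in a single GAN discriminator is the K+1-class formulation of Salimans et al. (2016), sketched below in PyTorch. This is offered only as a hedged illustration of the general idea; the paper's actual architecture, losses, and training schedule are not reproduced here, and the function and argument names are hypothetical.

```python
# A hedged illustration of a semi-supervised GAN discriminator loss: K real
# classes plus one "fake" class (after Salimans et al., 2016). Not the paper's
# actual method.
import torch
import torch.nn.functional as F

def discriminator_loss(logits_lab, y_lab, logits_unlab, logits_fake, n_classes):
    """All logits have shape (batch, n_classes + 1); column n_classes is 'fake'."""
    # labelled real images: cross-entropy with the true class as the target
    loss_sup = F.cross_entropy(logits_lab, y_lab)
    # unlabelled real images: should land in *some* real class, not the fake one
    log_p_real = (torch.logsumexp(logits_unlab[:, :n_classes], dim=1)
                  - torch.logsumexp(logits_unlab, dim=1))
    loss_unsup_real = -log_p_real.mean()
    # generated images: should be assigned to the extra fake class
    fake_target = torch.full((len(logits_fake),), n_classes, dtype=torch.long)
    loss_unsup_fake = F.cross_entropy(logits_fake, fake_target)
    return loss_sup + loss_unsup_real + loss_unsup_fake
```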
Xue Liu, Dan Sun, Wei Wei (2021)
The Graph Convolutional Networks (GCNs) proposed by Kipf and Welling are effective models for semi-supervised learning, but they face the obstacle of over-smoothing, which weakens the representation ability of GCNs. Recently, some works have been proposed to tackle this limitation by randomly perturbing the graph topology or feature matrix to generate data augmentations as input for training. However, these operations come at the price of breaking the integrity of the information structure and inevitably sacrifice information from the original graph stochastically. In this paper, we introduce a novel graph entropy definition as a quantitative index to evaluate feature information diffusion across a graph. Under the consideration of preserving graph entropy, we propose an effective strategy to generate perturbed training data using a stochastic mechanism while guaranteeing graph topology integrity and allowing only a small amount of graph entropy decay. Extensive experiments have been conducted on real-world datasets, and the results verify the effectiveness of our proposed method in improving semi-supervised node classification accuracy compared with a range of baselines. Beyond that, our proposed approach significantly enhances the robustness and generalization ability of GCNs during the training process.
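The sketch below conveys the flavour of entropy-guarded perturbation: candidate edge drops are rejected whenever they reduce an entropy measure of the graph by more than a small tolerance. The degree-distribution entropy used here is only a hypothetical stand-in for the feature-aware graph entropy defined in the paper, and the drop probability and tolerance are assumed values.

```python
# A hedged sketch of entropy-guarded edge perturbation; the entropy proxy,
# drop_prob, and max_decay are assumptions, not the paper's definitions.
import numpy as np

def degree_entropy(adj):
    """Shannon entropy of the normalized degree distribution."""
    deg = adj.sum(axis=1)
    p = deg / deg.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def perturb_edges(adj, drop_prob=0.05, max_decay=0.01, seed=0):
    """Randomly drop a few edges of an undirected graph (no self-loops), but
    reject perturbations that reduce the entropy proxy by more than max_decay."""
    rng = np.random.default_rng(seed)
    base = degree_entropy(adj)
    n = len(adj)
    for _ in range(100):                             # bounded number of retries
        keep = np.triu(rng.random((n, n)) > drop_prob, k=1)
        new_adj = adj * (keep | keep.T)              # drop each edge symmetrically
        if degree_entropy(new_adj) >= (1.0 - max_decay) * base:
            return new_adj
    return adj                                       # fall back to the original graph
```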
