Recent semi-supervised learning (SSL) methods are commonly based on pseudo labeling. Since SSL performance is greatly influenced by the quality of the pseudo labels, mutual learning has been proposed to effectively suppress the noise in the pseudo supervision. In this work, we propose robust mutual learning, which improves the prior approach in two aspects. First, vanilla mutual learners suffer from a coupling issue: the models may converge to homogeneous knowledge. We resolve this issue by introducing mean teachers to generate the mutual supervision, so that there is no direct interaction between the two students. We also show that strong data augmentation, model noise, and heterogeneous network architectures are essential to alleviate model coupling. Second, we notice that mutual learning fails to leverage the network's own ability for pseudo label refinement. We therefore introduce self-rectification, which leverages internal knowledge and explicitly rectifies the pseudo labels before the mutual teaching. Self-rectification and mutual teaching collaboratively improve the pseudo label accuracy throughout learning. The proposed robust mutual learning demonstrates state-of-the-art performance on semantic segmentation in the low-data regime.
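As a rough illustration of the mutual-teaching step described in this abstract, the sketch below pairs each student with a mean teacher (an EMA copy of its weights) and lets each teacher's pseudo labels supervise the other student, so the two students never interact directly. The function names, the EMA decay, and the confidence weighting standing in for self-rectification are assumptions for illustration, not the authors' exact implementation.

    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def ema_update(teacher, student, decay=0.99):
        # Mean teacher: exponential moving average of the student's weights.
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(decay).add_(s, alpha=1.0 - decay)

    def mutual_teaching_step(student_a, student_b, teacher_a, teacher_b, x_unlabeled):
        with torch.no_grad():
            # Each mean teacher produces pseudo labels for the *other* student.
            p_a = torch.softmax(teacher_a(x_unlabeled), dim=1)
            p_b = torch.softmax(teacher_b(x_unlabeled), dim=1)
            # Illustrative stand-in for self-rectification: each teacher
            # down-weights pixels it is itself uncertain about.
            y_a, conf_a = p_a.argmax(1), p_a.max(1).values
            y_b, conf_b = p_b.argmax(1), p_b.max(1).values
        loss_a = (F.cross_entropy(student_a(x_unlabeled), y_b, reduction="none") * conf_b).mean()
        loss_b = (F.cross_entropy(student_b(x_unlabeled), y_a, reduction="none") * conf_a).mean()
        return loss_a + loss_b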
We present a novel semi-supervised semantic segmentation method which jointly achieves two desiderata of segmentation model regularities: the label-space consistency property between image augmentations and the feature-space contrastive property among ...
Recent semi-supervised learning methods use pseudo supervision as the core idea, especially self-training methods that generate pseudo labels. However, pseudo labels are unreliable. Self-training methods usually rely on single-model prediction confidence ...
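The confidence-based pseudo labeling that such self-training pipelines typically apply can be sketched as follows; the 0.9 threshold, the function names, and the ignore_index convention are illustrative assumptions rather than any particular paper's setting.

    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def make_pseudo_labels(model, x_unlabeled, threshold=0.9, ignore_index=255):
        probs = torch.softmax(model(x_unlabeled), dim=1)  # (N, C, H, W)
        conf, pseudo = probs.max(dim=1)                   # per-pixel confidence and class
        pseudo[conf < threshold] = ignore_index           # drop unreliable pixels
        return pseudo

    def self_training_loss(model, x_unlabeled, pseudo, ignore_index=255):
        # Retrain on the retained (high-confidence) pseudo-labeled pixels only.
        return F.cross_entropy(model(x_unlabeled), pseudo, ignore_index=ignore_index)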
Semi-supervised learning has attracted great attention in the field of machine learning, especially for medical image segmentation tasks, since it alleviates the heavy burden of collecting abundant densely annotated data for training. However, most ...
In this paper, we study the semi-supervised semantic segmentation problem via exploring both labeled data and extra unlabeled data. We propose a novel consistency regularization approach, called cross pseudo supervision (CPS). Our approach imposes the ...
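Cross pseudo supervision trains two segmentation networks with different initializations and lets the one-hot pseudo labels of each network supervise the other with a standard cross-entropy loss. The sketch below illustrates that objective on an unlabeled batch; the function name and the loss weight are chosen for illustration, not taken from the paper.

    import torch
    import torch.nn.functional as F

    def cps_loss(net1, net2, x_unlabeled, weight=1.0):
        logits1, logits2 = net1(x_unlabeled), net2(x_unlabeled)
        # Hard (argmax) pseudo labels, detached so no gradient flows through them.
        y1 = logits1.argmax(dim=1).detach()
        y2 = logits2.argmax(dim=1).detach()
        # Each network is supervised by the other's pseudo labels.
        return weight * (F.cross_entropy(logits1, y2) + F.cross_entropy(logits2, y1))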
This paper addresses semi-supervised semantic segmentation by exploiting a small set of images with pixel-level annotations (strong supervision) and a large set of images with only image-level annotations (weak supervision). Most existing approaches ...