
Prototype Mixture Models for Few-shot Semantic Segmentation

Published by: Boyu Yang
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Few-shot segmentation is challenging because objects within the support and query images can differ significantly in appearance and pose. Using a single prototype acquired directly from the support image to segment the query image causes semantic ambiguity. In this paper, we propose prototype mixture models (PMMs), which correlate diverse image regions with multiple prototypes to enforce the prototype-based semantic representation. Estimated by an Expectation-Maximization algorithm, PMMs incorporate rich channel-wise and spatial semantics from limited support images. Utilized as representations as well as classifiers, PMMs fully leverage the semantics to activate objects in the query image while suppressing background regions in a duplex manner. Extensive experiments on the Pascal VOC and MS-COCO datasets show that PMMs significantly improve upon state-of-the-art methods. In particular, PMMs improve 5-shot segmentation performance on MS-COCO by up to 5.82% with only a moderate cost in model size and inference speed.
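The EM estimation of multiple prototypes described above can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's exact formulation: the cosine-similarity E-step and the temperature `temp` are illustrative assumptions.

```python
import numpy as np

def em_prototype_mixture(features, k=3, iters=10, temp=20.0, seed=0):
    """Estimate k prototypes from masked support features via a simple
    EM loop (a sketch of the prototype-mixture idea).

    features: (n, c) array of foreground feature vectors.
    Returns a (k, c) array of unit-norm prototypes.
    """
    rng = np.random.default_rng(seed)
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    # Initialise prototypes from k randomly chosen feature vectors.
    protos = feats[rng.choice(len(feats), k, replace=False)]
    for _ in range(iters):
        # E-step: soft-assign each feature to prototypes by cosine similarity.
        logits = temp * (feats @ protos.T)           # (n, k)
        logits -= logits.max(axis=1, keepdims=True)
        resp = np.exp(logits)
        resp /= resp.sum(axis=1, keepdims=True)      # responsibilities
        # M-step: prototypes become responsibility-weighted feature means.
        protos = resp.T @ feats                      # (k, c)
        protos /= np.linalg.norm(protos, axis=1, keepdims=True)
    return protos
```

In this sketch the mixture components partition the foreground features into regions, so each prototype summarises one part of the object rather than averaging the whole mask into a single vector.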




Read also

This paper aims to address few-shot semantic segmentation. While existing prototype-based methods have achieved considerable success, they suffer from uncertainty and ambiguity caused by limited labelled examples. In this work, we propose attentional prototype inference (API), a probabilistic latent variable framework for few-shot semantic segmentation. We define a global latent variable to represent the prototype of each object category, which we model as a probabilistic distribution. The probabilistic modeling of the prototype enhances the model's generalization ability by handling the inherent uncertainty caused by limited data and intra-class variations of objects. To further enhance the model, we introduce a local latent variable to represent the attention map of each query image, which enables the model to attend to foreground objects while suppressing the background. The optimization of the proposed model is formulated as a variational Bayesian inference problem, which is established by amortized inference networks. We conduct extensive experiments on three benchmarks, where our proposal obtains at least competitive and often better performance than state-of-the-art methods. We also provide comprehensive analyses and ablation studies to gain insight into the effectiveness of our method for few-shot semantic segmentation.
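The probabilistic prototype at the heart of such a variational formulation can be illustrated with the standard reparameterisation trick. Here `mu` and `log_var` stand in for the outputs of an amortised inference network; the names are illustrative, not from the paper.

```python
import numpy as np

def sample_prototype(mu, log_var, rng):
    """Reparameterised draw from a Gaussian prototype posterior
    q(z) = N(mu, diag(exp(log_var))) -- a generic variational-inference
    ingredient, sketching how a probabilistic prototype could be sampled
    while keeping the sample differentiable w.r.t. mu and log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps
```

Sampling the prototype, rather than using a point estimate, is what lets the model express uncertainty over the class representation when only a few labelled examples are available.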
Despite the great progress made by deep CNNs in image semantic segmentation, they typically require a large number of densely-annotated images for training and are difficult to generalize to unseen object categories. Few-shot segmentation has thus been developed to learn to perform segmentation from only a few annotated examples. In this paper, we tackle the challenging few-shot segmentation problem from a metric learning perspective and present PANet, a novel prototype alignment network to better utilize the information of the support set. Our PANet learns class-specific prototype representations from a few support images within an embedding space and then performs segmentation over the query images by matching each pixel to the learned prototypes. With non-parametric metric learning, PANet offers high-quality prototypes that are representative for each semantic class and meanwhile discriminative for different classes. Moreover, PANet introduces a prototype alignment regularization between support and query. With this, PANet fully exploits knowledge from the support and provides better generalization on few-shot segmentation. Significantly, our model achieves mIoU scores of 48.1% and 55.7% on PASCAL-5i for 1-shot and 5-shot settings respectively, surpassing the state-of-the-art method by 1.8% and 8.6%.
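The masked-average-pooling and per-pixel matching pipeline that such prototype-based methods build on can be sketched as follows. This is a simplified NumPy illustration; the function names and epsilon constants are my own, and real implementations operate on deep backbone features rather than raw arrays.

```python
import numpy as np

def masked_average_prototype(feat, mask):
    """feat: (c, h, w) support feature map; mask: (h, w) binary foreground
    mask. Returns the (c,) class prototype via masked average pooling."""
    m = mask.astype(feat.dtype)
    return (feat * m).sum(axis=(1, 2)) / (m.sum() + 1e-8)

def segment_by_prototypes(query_feat, prototypes):
    """query_feat: (c, h, w); prototypes: list of (c,) vectors
    (e.g. [background, foreground]). Labels each query pixel with the
    index of the most cosine-similar prototype."""
    c, h, w = query_feat.shape
    q = query_feat.reshape(c, -1)
    q = q / (np.linalg.norm(q, axis=0, keepdims=True) + 1e-8)
    p = np.stack(prototypes)
    p = p / (np.linalg.norm(p, axis=1, keepdims=True) + 1e-8)
    sims = p @ q                                   # (k, h*w)
    return sims.argmax(axis=0).reshape(h, w)
```

Because the matching step is non-parametric, no per-class weights are learned; new classes are handled simply by pooling new prototypes from their support masks.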
Jinlu Liu, Yongqiang Qin (2020)
Few-shot segmentation aims to segment new classes with only a few annotated images provided. It is more challenging than traditional semantic segmentation tasks that segment known classes with abundant annotated images. In this paper, we propose a Prototype Refinement Network (PRNet) to attack the challenge of few-shot segmentation. It first learns to bidirectionally extract prototypes from both support and query images of the known classes. Furthermore, to extract representative prototypes of the new classes, we use adaptation and fusion for prototype refinement. The adaptation step makes the model learn new concepts, implemented directly by retraining. Prototype fusion, proposed here for the first time, fuses support prototypes with query prototypes, incorporating the knowledge from both sides. It is effective for prototype refinement without introducing extra learnable parameters. In this way, the prototypes become more discriminative in low-data regimes. Experiments on PASCAL-$5^i$ and COCO-$20^i$ demonstrate the superiority of our method. Especially on COCO-$20^i$, PRNet significantly outperforms existing methods by a large margin of 13.1% in the 1-shot setting.
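Parameter-free prototype fusion of the kind described can be as simple as a convex combination of the support- and query-derived prototypes. The mixing weight `alpha` below is an illustrative assumption, not a value from the paper.

```python
import numpy as np

def fuse_prototypes(support_proto, query_proto, alpha=0.5):
    """Fuse a support-derived and a query-derived prototype without any
    learnable parameters: a convex combination followed by renormalisation.
    support_proto, query_proto: (c,) vectors; returns a unit-norm (c,) vector."""
    fused = alpha * support_proto + (1.0 - alpha) * query_proto
    return fused / np.linalg.norm(fused)
```

Because the fusion is a fixed arithmetic operation, it refines prototypes at inference time without adding anything that could overfit the few labelled examples.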
Point cloud segmentation is a fundamental visual understanding task in 3D vision. A fully supervised point cloud segmentation network often requires a large amount of data with point-wise annotations, which is expensive to obtain. In this work, we present the Compositional Prototype Network, which can undertake point cloud segmentation with only a few labeled training examples. Inspired by the few-shot learning literature in images, our network directly transfers label information from the limited training data to unlabeled test data for prediction. The network decomposes the representations of complex point cloud data into a set of local regional representations and uses them to calculate the compositional prototypes of a visual concept. Our network includes a key Multi-View Comparison Component that exploits the redundant views of the support set. To evaluate the proposed method, we create a new segmentation benchmark dataset, ScanNet-$6^i$, which is built upon the ScanNet dataset. Extensive experiments show that our method outperforms baselines by a significant margin. Moreover, when we use our network to handle the long-tail problem in a fully supervised point cloud segmentation dataset, it can also effectively boost the performance of the few-shot classes.
Few-shot semantic segmentation models aim to segment images after learning from only a few annotated examples. A key challenge for them is overfitting. Prior works usually limit the overall model capacity to alleviate overfitting, but the limited capacity also hampers segmentation accuracy. We instead propose a method that increases the overall model capacity by supplementing class-specific features with objectness, which is class-agnostic and therefore not prone to overfitting. Extensive experiments demonstrate the versatility of our method with multiple backbone models (ResNet-50, ResNet-101 and HRNetV2-W48) and existing base architectures (DENet and PFENet). Given only one annotated example of an unseen category, experiments show that our method outperforms state-of-the-art methods with respect to mIoU by at least 4.7% and 1.5% on PASCAL-5i and COCO-20i respectively.