
A Few Guidelines for Incremental Few-Shot Segmentation

Posted by Fabio Cermelli
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Reducing the amount of supervision required by neural networks is especially important in the context of semantic segmentation, where collecting dense pixel-level annotations is particularly expensive. In this paper, we address this problem from a new perspective: Incremental Few-Shot Segmentation. In particular, given a pretrained segmentation model and a few images containing novel classes, our goal is to learn to segment the novel classes while retaining the ability to segment previously seen ones. In this context, we discover, contrary to common belief, that fine-tuning the whole architecture with these few images is not only meaningful but also very effective. We show that the main problems of end-to-end training in this scenario are i) the drift of the batch-normalization statistics toward the novel classes, which we can fix with batch renormalization, and ii) the forgetting of old classes, which we can fix with regularization strategies. We summarize our findings in five guidelines that together consistently lead to state-of-the-art results on the COCO and Pascal-VOC 2012 datasets, with different numbers of images per class and even with multiple learning episodes.
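
The batch-renormalization fix mentioned in the abstract can be illustrated with a short sketch. The snippet below is a minimal, hedged illustration of the standard Batch Renormalization update (Ioffe, 2017), not the authors' exact implementation; the module and parameter names (BatchRenorm2d, r_max, d_max) are chosen here purely for illustration.

import torch
import torch.nn as nn

class BatchRenorm2d(nn.Module):
    """Minimal Batch Renormalization layer (sketch).

    During training it normalizes with the current batch statistics but
    corrects them toward the running statistics via the clipped factors
    r and d, which keeps the statistics from drifting toward the few
    novel-class images. r_max / d_max are fixed here for brevity.
    """
    def __init__(self, num_features, eps=1e-5, momentum=0.01, r_max=3.0, d_max=5.0):
        super().__init__()
        self.eps, self.momentum, self.r_max, self.d_max = eps, momentum, r_max, d_max
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))
        self.register_buffer("running_mean", torch.zeros(num_features))
        self.register_buffer("running_var", torch.ones(num_features))

    def forward(self, x):                       # x: [N, C, H, W]
        if self.training:
            mean = x.mean(dim=(0, 2, 3))
            var = x.var(dim=(0, 2, 3), unbiased=False)
            std = (var + self.eps).sqrt()
            running_std = (self.running_var + self.eps).sqrt()
            # correction factors use the running stats but carry no gradient
            r = (std.detach() / running_std).clamp(1.0 / self.r_max, self.r_max)
            d = ((mean.detach() - self.running_mean) / running_std).clamp(-self.d_max, self.d_max)
            x_hat = (x - mean[None, :, None, None]) / std[None, :, None, None]
            x_hat = x_hat * r[None, :, None, None] + d[None, :, None, None]
            # slowly update the running statistics
            self.running_mean += self.momentum * (mean.detach() - self.running_mean)
            self.running_var += self.momentum * (var.detach() - self.running_var)
        else:
            x_hat = (x - self.running_mean[None, :, None, None]) / \
                    (self.running_var[None, :, None, None] + self.eps).sqrt()
        return x_hat * self.weight[None, :, None, None] + self.bias[None, :, None, None]

In an incremental fine-tuning step one would simply swap such a layer in for plain batch normalization, so the few novel-class images cannot pull the normalization statistics far from those estimated on the base classes.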




Read also

The ability to incrementally learn new classes is crucial to the development of real-world artificial intelligence systems. In this paper, we focus on a challenging but practical few-shot class-incremental learning (FSCIL) problem. FSCIL requires CNN models to incrementally learn new classes from very few labelled samples, without forgetting the previously learned ones. To address this problem, we represent the knowledge using a neural gas (NG) network, which can learn and preserve the topology of the feature manifold formed by different classes. On this basis, we propose the TOpology-Preserving knowledge InCrementer (TOPIC) framework. TOPIC mitigates the forgetting of the old classes by stabilizing the NG's topology and improves the representation learning for few-shot new classes by growing and adapting the NG to new training samples. Comprehensive experimental results demonstrate that our proposed method significantly outperforms other state-of-the-art class-incremental learning methods on the CIFAR100, miniImageNet, and CUB200 datasets.
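
As a rough illustration of how a neural gas preserves the topology of a feature manifold, the sketch below implements the classic neural-gas weight update (rank-based soft competition). TOPIC builds on this idea but adds stabilization and growth mechanisms that are not shown here; names such as NeuralGas and fit_step are illustrative only.

import numpy as np

class NeuralGas:
    """Classic neural-gas quantizer (sketch): each node moves toward a sample
    with a strength that decays with its distance rank, so the node set ends
    up tracing the topology of the feature manifold."""
    def __init__(self, n_nodes, dim, lr=0.1, decay=2.0, rng=None):
        rng = rng or np.random.default_rng(0)
        self.nodes = rng.normal(size=(n_nodes, dim))
        self.lr, self.decay = lr, decay

    def fit_step(self, x):                       # x: one feature vector [dim]
        dists = np.linalg.norm(self.nodes - x, axis=1)
        ranks = np.argsort(np.argsort(dists))    # 0 = closest node
        h = np.exp(-ranks / self.decay)          # rank-based neighbourhood weight
        self.nodes += self.lr * h[:, None] * (x - self.nodes)

# usage: quantize class features into a small topology-preserving set of nodes
ng = NeuralGas(n_nodes=16, dim=64)
for feat in np.random.normal(size=(200, 64)):    # placeholder features
    ng.fit_step(feat)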
Conventional detection networks usually need abundant labeled training samples, while humans can learn new concepts incrementally with just a few examples. This paper focuses on a more challenging but realistic class-incremental few-shot object detection problem (iFSD). It aims to incrementally transfer the model for novel objects from only a few annotated samples without catastrophically forgetting the previously learned ones. To tackle this problem, we propose a novel method LEAST, which can transfer with Less forgetting, fEwer training resources, And Stronger Transfer capability. Specifically, we first present the transfer strategy to reduce unnecessary weight adaptation and improve the transfer capability for iFSD. On this basis, we then integrate the knowledge distillation technique using a less resource-consuming approach to alleviate forgetting and propose a novel clustering-based exemplar selection process to preserve more discriminative features previously learned. Being a generic and effective method, LEAST can largely improve the iFSD performance on various benchmarks.
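
The distillation component mentioned above follows the usual pattern of matching the new model's outputs to those of the frozen old model. The snippet below is a generic, hedged sketch of such a loss; the temperature T, the weighting lambda_kd, and the name distillation_loss are assumptions for illustration, not LEAST's exact formulation.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between temperature-softened teacher and student
    distributions; keeps the fine-tuned model close to the old one on
    the previously learned classes."""
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

# usage: teacher is the frozen pre-incremental model, student is being fine-tuned
# loss = task_loss + lambda_kd * distillation_loss(student_out, teacher_out.detach())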
Few-shot semantic segmentation aims at learning to segment a target object from a query image using only a few annotated support images of the target class. This challenging task requires understanding diverse levels of visual cues and analyzing fine-grained correspondence relations between the query and the support images. To address the problem, we propose Hypercorrelation Squeeze Networks (HSNet), which leverage multi-level feature correlation and efficient 4D convolutions. HSNet extracts diverse features from different levels of intermediate convolutional layers and constructs a collection of 4D correlation tensors, i.e., hypercorrelations. Using efficient center-pivot 4D convolutions in a pyramidal architecture, the method gradually squeezes high-level semantic and low-level geometric cues of the hypercorrelation into precise segmentation masks in a coarse-to-fine manner. The significant performance improvements on the standard few-shot segmentation benchmarks PASCAL-5i, COCO-20i, and FSS-1000 verify the efficacy of the proposed method.
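
To make the idea of a hypercorrelation concrete, the sketch below builds one 4D correlation tensor as the cosine similarity between every query location and every support location at a single feature level; HSNet stacks such tensors from many layers and processes them with center-pivot 4D convolutions, which are omitted here. The function name correlation_4d is an assumption for illustration.

import torch
import torch.nn.functional as F

def correlation_4d(query_feat, support_feat):
    """query_feat: [C, Hq, Wq], support_feat: [C, Hs, Ws].
    Returns a 4D tensor [Hq, Wq, Hs, Ws] of cosine similarities between
    every pair of query/support locations (one hypercorrelation slice)."""
    q = F.normalize(query_feat.flatten(1), dim=0)     # [C, Hq*Wq], unit-norm per location
    s = F.normalize(support_feat.flatten(1), dim=0)   # [C, Hs*Ws]
    corr = q.t() @ s                                   # [Hq*Wq, Hs*Ws] cosine similarities
    Hq, Wq = query_feat.shape[1:]
    Hs, Ws = support_feat.shape[1:]
    return corr.clamp(min=0).view(Hq, Wq, Hs, Ws)     # keep positive correlations only

# usage with placeholder feature maps from one intermediate layer
corr = correlation_4d(torch.randn(256, 32, 32), torch.randn(256, 32, 32))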
Jinlu Liu, Yongqiang Qin (2020)
Few-shot segmentation aims to segment new classes with only a few annotated images provided. It is more challenging than traditional semantic segmentation tasks, which segment known classes with abundant annotated images. In this paper, we propose a Prototype Refinement Network (PRNet) to attack the challenge of few-shot segmentation. It first learns to bidirectionally extract prototypes from both support and query images of the known classes. Furthermore, to extract representative prototypes of the new classes, we use adaptation and fusion for prototype refinement. The adaptation step makes the model learn new concepts and is implemented directly by retraining. We are the first to propose prototype fusion, which fuses support prototypes with query prototypes to incorporate knowledge from both sides. It is effective for prototype refinement without introducing extra learnable parameters. In this way, the prototypes become more discriminative in low-data regimes. Experiments on PASCAL-$5^i$ and COCO-$20^i$ demonstrate the superiority of our method. Especially on COCO-$20^i$, PRNet significantly outperforms existing methods by a large margin of 13.1% in the 1-shot setting.
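
Prototype extraction and fusion of the kind described above can be sketched in a few lines: a class prototype is the mask-weighted average of a feature map, and fusion is a convex combination of the support and query prototypes. This is a hedged illustration with assumed names (masked_avg_pool, alpha), not PRNet's exact refinement procedure.

import torch

def masked_avg_pool(feat, mask):
    """feat: [C, H, W] features, mask: [H, W] binary mask of the target class.
    Returns the class prototype as the mask-weighted average feature vector [C]."""
    mask = mask.unsqueeze(0).float()                        # [1, H, W]
    return (feat * mask).sum(dim=(1, 2)) / mask.sum().clamp(min=1.0)

# support prototype from the annotated support image, query prototype from the
# query image using the model's own predicted mask, then a simple fusion
support_proto = masked_avg_pool(torch.randn(256, 64, 64), torch.randint(0, 2, (64, 64)))
query_proto = masked_avg_pool(torch.randn(256, 64, 64), torch.randint(0, 2, (64, 64)))
alpha = 0.5                                                 # fusion weight (assumed)
fused_proto = alpha * support_proto + (1 - alpha) * query_proto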
Kai Zhu, Wei Zhai, Zheng-Jun Zha (2020)
Few-shot segmentation aims at assigning a category label to each image pixel with few annotated samples. It is a challenging task since the dense prediction can only be achieved under the guidance of latent features defined by sparse annotations. Existing meta-learning methods tend to fail to generate category-specific discriminative descriptors when the visual features extracted from support images are marginalized in the embedding space. To address this issue, this paper presents an adaptive tuning framework, in which the distribution of latent features across different episodes is dynamically adjusted based on a self-segmentation scheme, augmenting category-specific descriptors for label prediction. Specifically, a novel self-supervised inner loop is first devised as the base learner to extract the underlying semantic features from the support image. Then, gradient maps are calculated by back-propagating the self-supervised loss through the obtained features and leveraged as guidance for augmenting the corresponding elements in the embedding space. Finally, with the ability to continuously learn from different episodes, an optimization-based meta-learner is adopted as the outer loop of our proposed framework to gradually refine the segmentation results. Extensive experiments on the benchmark PASCAL-$5^{i}$ and COCO-$20^{i}$ datasets demonstrate the superiority of our proposed method over the state of the art.
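
The gradient-map guidance described above can be sketched roughly as follows: back-propagate a self-supervised loss to the support features and use the resulting gradient magnitudes to re-weight the embedding. The loss, tensor shapes, and modulation rule below are placeholders chosen for illustration, not the paper's exact formulation.

import torch
import torch.nn.functional as F

# placeholder support features from the encoder
feat = torch.randn(1, 256, 32, 32, requires_grad=True)
# stand-in self-supervised objective: reconstruct a target from pooled features
recon = F.interpolate(feat.mean(1, keepdim=True), size=(64, 64))
target = torch.randn(1, 1, 64, 64)                          # self-supervision target (placeholder)
loss = F.mse_loss(recon, target)
# gradient map with the same shape as the features
grad = torch.autograd.grad(loss, feat)[0]
# emphasize the elements the self-supervised loss is most sensitive to
guided = feat.detach() * (1.0 + grad.abs() / (grad.abs().max() + 1e-6))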
